2026-03-29 00:00:07.796720 | Job console starting
2026-03-29 00:00:07.823297 | Updating git repos
2026-03-29 00:00:07.948204 | Cloning repos into workspace
2026-03-29 00:00:08.311525 | Restoring repo states
2026-03-29 00:00:08.345437 | Merging changes
2026-03-29 00:00:08.345456 | Checking out repos
2026-03-29 00:00:09.037258 | Preparing playbooks
2026-03-29 00:00:10.271043 | Running Ansible setup
2026-03-29 00:00:18.327596 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-29 00:00:19.865337 |
2026-03-29 00:00:19.865475 | PLAY [Base pre]
2026-03-29 00:00:19.933692 |
2026-03-29 00:00:19.933836 | TASK [Setup log path fact]
2026-03-29 00:00:19.974828 | orchestrator | ok
2026-03-29 00:00:20.060461 |
2026-03-29 00:00:20.060623 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-29 00:00:20.128761 | orchestrator | ok
2026-03-29 00:00:20.186470 |
2026-03-29 00:00:20.186669 | TASK [emit-job-header : Print job information]
2026-03-29 00:00:20.228619 | # Job Information
2026-03-29 00:00:20.228829 | Ansible Version: 2.16.14
2026-03-29 00:00:20.228864 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-29 00:00:20.228903 | Pipeline: periodic-midnight
2026-03-29 00:00:20.228940 | Executor: 521e9411259a
2026-03-29 00:00:20.228961 | Triggered by: https://github.com/osism/testbed
2026-03-29 00:00:20.228983 | Event ID: 8728361d0a6a491ab345cc1284af2839
2026-03-29 00:00:20.247701 |
2026-03-29 00:00:20.247818 | LOOP [emit-job-header : Print node information]
2026-03-29 00:00:20.672029 | orchestrator | ok:
2026-03-29 00:00:20.672213 | orchestrator | # Node Information
2026-03-29 00:00:20.672251 | orchestrator | Inventory Hostname: orchestrator
2026-03-29 00:00:20.672278 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-29 00:00:20.672301 | orchestrator | Username: zuul-testbed05
2026-03-29 00:00:20.672323 | orchestrator | Distro: Debian 12.13
2026-03-29 00:00:20.672346 | orchestrator | Provider: static-testbed
2026-03-29 00:00:20.672368 | orchestrator | Region:
2026-03-29 00:00:20.672389 | orchestrator | Label: testbed-orchestrator
2026-03-29 00:00:20.672409 | orchestrator | Product Name: OpenStack Nova
2026-03-29 00:00:20.672429 | orchestrator | Interface IP: 81.163.193.140
2026-03-29 00:00:20.696847 |
2026-03-29 00:00:20.696990 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-29 00:00:22.040754 | orchestrator -> localhost | changed
2026-03-29 00:00:22.067841 |
2026-03-29 00:00:22.067994 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-29 00:00:24.672961 | orchestrator -> localhost | changed
2026-03-29 00:00:24.741336 |
2026-03-29 00:00:24.741464 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-29 00:00:25.706022 | orchestrator -> localhost | ok
2026-03-29 00:00:25.715253 |
2026-03-29 00:00:25.715368 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-29 00:00:25.802898 | orchestrator | ok
2026-03-29 00:00:25.866784 | orchestrator | included: /var/lib/zuul/builds/b6f2ba222d6b4c61a0a2e9d3c483dd72/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-29 00:00:25.930712 |
2026-03-29 00:00:25.930823 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-29 00:00:32.139415 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-29 00:00:32.139614 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/b6f2ba222d6b4c61a0a2e9d3c483dd72/work/b6f2ba222d6b4c61a0a2e9d3c483dd72_id_rsa
2026-03-29 00:00:32.139651 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/b6f2ba222d6b4c61a0a2e9d3c483dd72/work/b6f2ba222d6b4c61a0a2e9d3c483dd72_id_rsa.pub
2026-03-29 00:00:32.139677 | orchestrator -> localhost | The key fingerprint is:
2026-03-29 00:00:32.139705 | orchestrator -> localhost | SHA256:OUER54BIic1gGtcHwGQZ3ziUGi/AIFBwGckIdvh/BQw zuul-build-sshkey
2026-03-29 00:00:32.139728 | orchestrator -> localhost | The key's randomart image is:
2026-03-29 00:00:32.139762 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-29 00:00:32.139784 | orchestrator -> localhost | |%O&/+Eo.=o. |
2026-03-29 00:00:32.139805 | orchestrator -> localhost | |=XXoBoo+ + |
2026-03-29 00:00:32.139825 | orchestrator -> localhost | |...++.. o . |
2026-03-29 00:00:32.139844 | orchestrator -> localhost | | o... + |
2026-03-29 00:00:32.139863 | orchestrator -> localhost | | .. S |
2026-03-29 00:00:32.139914 | orchestrator -> localhost | | . . . |
2026-03-29 00:00:32.139940 | orchestrator -> localhost | | . |
2026-03-29 00:00:32.139961 | orchestrator -> localhost | | |
2026-03-29 00:00:32.139982 | orchestrator -> localhost | | |
2026-03-29 00:00:32.140001 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-29 00:00:32.140051 | orchestrator -> localhost | ok: Runtime: 0:00:04.195374
2026-03-29 00:00:32.147058 |
2026-03-29 00:00:32.147168 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-29 00:00:32.195910 | orchestrator | ok
2026-03-29 00:00:32.216482 | orchestrator | included: /var/lib/zuul/builds/b6f2ba222d6b4c61a0a2e9d3c483dd72/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-29 00:00:32.247150 |
2026-03-29 00:00:32.247267 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-29 00:00:32.270488 | orchestrator | skipping: Conditional result was False
2026-03-29 00:00:32.288035 |
2026-03-29 00:00:32.288168 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-29 00:00:33.348322 | orchestrator | changed
2026-03-29 00:00:33.355350 |
2026-03-29 00:00:33.355435 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-29 00:00:33.653726 | orchestrator | ok
2026-03-29 00:00:33.662805 |
2026-03-29 00:00:33.679448 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-29 00:00:34.244119 | orchestrator | ok
2026-03-29 00:00:34.250675 |
2026-03-29 00:00:34.250772 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-29 00:00:34.778149 | orchestrator | ok
2026-03-29 00:00:34.787263 |
2026-03-29 00:00:34.787354 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-29 00:00:34.840465 | orchestrator | skipping: Conditional result was False
2026-03-29 00:00:34.845969 |
2026-03-29 00:00:34.846055 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-29 00:00:36.229309 | orchestrator -> localhost | changed
2026-03-29 00:00:36.244853 |
2026-03-29 00:00:36.244978 | TASK [add-build-sshkey : Add back temp key]
2026-03-29 00:00:37.094668 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/b6f2ba222d6b4c61a0a2e9d3c483dd72/work/b6f2ba222d6b4c61a0a2e9d3c483dd72_id_rsa (zuul-build-sshkey)
2026-03-29 00:00:37.094861 | orchestrator -> localhost | ok: Runtime: 0:00:00.040676
2026-03-29 00:00:37.100736 |
2026-03-29 00:00:37.100819 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-29 00:00:37.844104 | orchestrator | ok
2026-03-29 00:00:37.852077 |
2026-03-29 00:00:37.852166 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-29 00:00:37.908496 | orchestrator | skipping: Conditional result was False
2026-03-29 00:00:38.030675 |
2026-03-29 00:00:38.030774 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-29 00:00:38.692811 | orchestrator | ok
2026-03-29 00:00:38.731605 |
2026-03-29 00:00:38.731728 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-29 00:00:38.760372 | orchestrator | ok
2026-03-29 00:00:38.811679 |
2026-03-29 00:00:38.811804 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-29 00:00:39.686366 | orchestrator -> localhost | ok
2026-03-29 00:00:39.693558 |
2026-03-29 00:00:39.693652 | TASK [validate-host : Collect information about the host]
2026-03-29 00:00:41.310781 | orchestrator | ok
2026-03-29 00:00:41.342262 |
2026-03-29 00:00:41.342399 | TASK [validate-host : Sanitize hostname]
2026-03-29 00:00:41.495074 | orchestrator | ok
2026-03-29 00:00:41.499742 |
2026-03-29 00:00:41.499830 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-29 00:00:43.169719 | orchestrator -> localhost | changed
2026-03-29 00:00:43.175425 |
2026-03-29 00:00:43.175516 | TASK [validate-host : Collect information about zuul worker]
2026-03-29 00:00:43.685923 | orchestrator | ok
2026-03-29 00:00:43.690366 |
2026-03-29 00:00:43.690451 | TASK [validate-host : Write out all zuul information for each host]
2026-03-29 00:00:45.032905 | orchestrator -> localhost | changed
2026-03-29 00:00:45.041421 |
2026-03-29 00:00:45.041511 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-29 00:00:45.356310 | orchestrator | ok
2026-03-29 00:00:45.361695 |
2026-03-29 00:00:45.361780 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-29 00:02:13.357704 | orchestrator | changed:
2026-03-29 00:02:13.359076 | orchestrator | .d..t...... src/
2026-03-29 00:02:13.359163 | orchestrator | .d..t...... src/github.com/
2026-03-29 00:02:13.359195 | orchestrator | .d..t...... src/github.com/osism/
2026-03-29 00:02:13.359221 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-29 00:02:13.359245 | orchestrator | RedHat.yml
2026-03-29 00:02:13.383591 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-29 00:02:13.383609 | orchestrator | RedHat.yml
2026-03-29 00:02:13.383662 | orchestrator | = 1.53.0"...
2026-03-29 00:02:27.766673 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-29 00:02:27.783091 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-29 00:02:27.949095 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-29 00:02:28.617527 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-29 00:02:28.814588 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-29 00:02:29.484835 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-29 00:02:29.645227 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-29 00:02:30.163071 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-29 00:02:30.163161 | orchestrator |
2026-03-29 00:02:30.163173 | orchestrator | Providers are signed by their developers.
2026-03-29 00:02:30.163181 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-29 00:02:30.163188 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-29 00:02:30.163198 | orchestrator |
2026-03-29 00:02:30.163205 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-29 00:02:30.163212 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-29 00:02:30.163230 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-29 00:02:30.163237 | orchestrator | you run "tofu init" in the future.
2026-03-29 00:02:30.163515 | orchestrator |
2026-03-29 00:02:30.163545 | orchestrator | OpenTofu has been successfully initialized!
2026-03-29 00:02:30.163561 | orchestrator |
2026-03-29 00:02:30.163566 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-29 00:02:30.163574 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-29 00:02:30.163578 | orchestrator | should now work.
2026-03-29 00:02:30.163582 | orchestrator |
2026-03-29 00:02:30.163586 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-29 00:02:30.163590 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-29 00:02:30.163595 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-29 00:02:30.347689 | orchestrator | Created and switched to workspace "ci"!
2026-03-29 00:02:30.347827 | orchestrator |
2026-03-29 00:02:30.347841 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-29 00:02:30.347851 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-29 00:02:30.347859 | orchestrator | for this configuration.
2026-03-29 00:02:31.113847 | orchestrator | ci.auto.tfvars
2026-03-29 00:02:31.925256 | orchestrator | default_custom.tf
2026-03-29 00:02:33.901199 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-29 00:02:34.451944 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-29 00:02:34.737794 | orchestrator |
2026-03-29 00:02:34.737925 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-29 00:02:34.737946 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-29 00:02:34.737960 | orchestrator | + create
2026-03-29 00:02:34.737972 | orchestrator | <= read (data resources)
2026-03-29 00:02:34.737985 | orchestrator |
2026-03-29 00:02:34.737997 | orchestrator | OpenTofu will perform the following actions:
2026-03-29 00:02:34.738071 | orchestrator |
2026-03-29 00:02:34.738086 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-29 00:02:34.738098 | orchestrator | # (config refers to values not yet known)
2026-03-29 00:02:34.738110 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-29 00:02:34.738121 | orchestrator | + checksum = (known after apply)
2026-03-29 00:02:34.738133 | orchestrator | + created_at = (known after apply)
2026-03-29 00:02:34.738144 | orchestrator | + file = (known after apply)
2026-03-29 00:02:34.738155 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.738200 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.738212 | orchestrator | + min_disk_gb = (known after apply)
2026-03-29 00:02:34.738223 | orchestrator | + min_ram_mb = (known after apply)
2026-03-29 00:02:34.738235 | orchestrator | + most_recent = true
2026-03-29 00:02:34.738246 | orchestrator | + name = (known after apply)
2026-03-29 00:02:34.738257 | orchestrator | + protected = (known after apply)
2026-03-29 00:02:34.738268 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.738283 | orchestrator | + schema = (known after apply)
2026-03-29 00:02:34.738294 | orchestrator | + size_bytes = (known after apply)
2026-03-29 00:02:34.738304 | orchestrator | + tags = (known after apply)
2026-03-29 00:02:34.738315 | orchestrator | + updated_at = (known after apply)
2026-03-29 00:02:34.738326 | orchestrator | }
2026-03-29 00:02:34.738337 | orchestrator |
2026-03-29 00:02:34.738348 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-29 00:02:34.738359 | orchestrator | # (config refers to values not yet known)
2026-03-29 00:02:34.738370 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-29 00:02:34.738381 | orchestrator | + checksum = (known after apply)
2026-03-29 00:02:34.738392 | orchestrator | + created_at = (known after apply)
2026-03-29 00:02:34.738403 | orchestrator | + file = (known after apply)
2026-03-29 00:02:34.738413 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.738424 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.738435 | orchestrator | + min_disk_gb = (known after apply)
2026-03-29 00:02:34.738471 | orchestrator | + min_ram_mb = (known after apply)
2026-03-29 00:02:34.738483 | orchestrator | + most_recent = true
2026-03-29 00:02:34.738494 | orchestrator | + name = (known after apply)
2026-03-29 00:02:34.738505 | orchestrator | + protected = (known after apply)
2026-03-29 00:02:34.738516 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.738527 | orchestrator | + schema = (known after apply)
2026-03-29 00:02:34.738537 | orchestrator | + size_bytes = (known after apply)
2026-03-29 00:02:34.738548 | orchestrator | + tags = (known after apply)
2026-03-29 00:02:34.738558 | orchestrator | + updated_at = (known after apply)
2026-03-29 00:02:34.738569 | orchestrator | }
2026-03-29 00:02:34.738580 | orchestrator |
2026-03-29 00:02:34.738591 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-29 00:02:34.738602 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-29 00:02:34.738613 | orchestrator | + content = (known after apply)
2026-03-29 00:02:34.738624 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-29 00:02:34.738635 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-29 00:02:34.738646 | orchestrator | + content_md5 = (known after apply)
2026-03-29 00:02:34.738657 | orchestrator | + content_sha1 = (known after apply)
2026-03-29 00:02:34.738693 | orchestrator | + content_sha256 = (known after apply)
2026-03-29 00:02:34.738704 | orchestrator | + content_sha512 = (known after apply)
2026-03-29 00:02:34.738715 | orchestrator | + directory_permission = "0777"
2026-03-29 00:02:34.738725 | orchestrator | + file_permission = "0644"
2026-03-29 00:02:34.738736 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-29 00:02:34.738747 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.738758 | orchestrator | }
2026-03-29 00:02:34.738769 | orchestrator |
2026-03-29 00:02:34.738779 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-29 00:02:34.738790 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-29 00:02:34.738801 | orchestrator | + content = (known after apply)
2026-03-29 00:02:34.738812 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-29 00:02:34.738823 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-29 00:02:34.738833 | orchestrator | + content_md5 = (known after apply)
2026-03-29 00:02:34.738844 | orchestrator | + content_sha1 = (known after apply)
2026-03-29 00:02:34.738855 | orchestrator | + content_sha256 = (known after apply)
2026-03-29 00:02:34.738865 | orchestrator | + content_sha512 = (known after apply)
2026-03-29 00:02:34.738877 | orchestrator | + directory_permission = "0777"
2026-03-29 00:02:34.738897 | orchestrator | + file_permission = "0644"
2026-03-29 00:02:34.738929 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-29 00:02:34.738949 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.738967 | orchestrator | }
2026-03-29 00:02:34.738985 | orchestrator |
2026-03-29 00:02:34.739025 | orchestrator | # local_file.inventory will be created
2026-03-29 00:02:34.739044 | orchestrator | + resource "local_file" "inventory" {
2026-03-29 00:02:34.739062 | orchestrator | + content = (known after apply)
2026-03-29 00:02:34.739081 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-29 00:02:34.739101 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-29 00:02:34.739120 | orchestrator | + content_md5 = (known after apply)
2026-03-29 00:02:34.739139 | orchestrator | + content_sha1 = (known after apply)
2026-03-29 00:02:34.739160 | orchestrator | + content_sha256 = (known after apply)
2026-03-29 00:02:34.739181 | orchestrator | + content_sha512 = (known after apply)
2026-03-29 00:02:34.739202 | orchestrator | + directory_permission = "0777"
2026-03-29 00:02:34.739221 | orchestrator | + file_permission = "0644"
2026-03-29 00:02:34.739242 | orchestrator | + filename = "inventory.ci"
2026-03-29 00:02:34.739262 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.739281 | orchestrator | }
2026-03-29 00:02:34.739313 | orchestrator |
2026-03-29 00:02:34.739335 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-29 00:02:34.739355 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-29 00:02:34.739374 | orchestrator | + content = (sensitive value)
2026-03-29 00:02:34.739391 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-29 00:02:34.739411 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-29 00:02:34.739431 | orchestrator | + content_md5 = (known after apply)
2026-03-29 00:02:34.739528 | orchestrator | + content_sha1 = (known after apply)
2026-03-29 00:02:34.739551 | orchestrator | + content_sha256 = (known after apply)
2026-03-29 00:02:34.739571 | orchestrator | + content_sha512 = (known after apply)
2026-03-29 00:02:34.739592 | orchestrator | + directory_permission = "0700"
2026-03-29 00:02:34.739612 | orchestrator | + file_permission = "0600"
2026-03-29 00:02:34.739630 | orchestrator | + filename = ".id_rsa.ci"
2026-03-29 00:02:34.739650 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.739670 | orchestrator | }
2026-03-29 00:02:34.739690 | orchestrator |
2026-03-29 00:02:34.739710 | orchestrator | # null_resource.node_semaphore will be created
2026-03-29 00:02:34.739730 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-29 00:02:34.739750 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.739769 | orchestrator | }
2026-03-29 00:02:34.739789 | orchestrator |
2026-03-29 00:02:34.739809 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-29 00:02:34.739829 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-29 00:02:34.739848 | orchestrator | + attachment = (known after apply)
2026-03-29 00:02:34.739867 | orchestrator | + availability_zone = "nova"
2026-03-29 00:02:34.739884 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.739904 | orchestrator | + image_id = (known after apply)
2026-03-29 00:02:34.739921 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.739940 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-29 00:02:34.739957 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.739975 | orchestrator | + size = 80
2026-03-29 00:02:34.739993 | orchestrator | + volume_retype_policy = "never"
2026-03-29 00:02:34.740012 | orchestrator | + volume_type = "ssd"
2026-03-29 00:02:34.740029 | orchestrator | }
2026-03-29 00:02:34.740046 | orchestrator |
2026-03-29 00:02:34.740062 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-29 00:02:34.740077 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.740092 | orchestrator | + attachment = (known after apply)
2026-03-29 00:02:34.740110 | orchestrator | + availability_zone = "nova"
2026-03-29 00:02:34.740128 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.740158 | orchestrator | + image_id = (known after apply)
2026-03-29 00:02:34.740174 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.740191 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-29 00:02:34.740210 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.740227 | orchestrator | + size = 80
2026-03-29 00:02:34.740244 | orchestrator | + volume_retype_policy = "never"
2026-03-29 00:02:34.740261 | orchestrator | + volume_type = "ssd"
2026-03-29 00:02:34.740279 | orchestrator | }
2026-03-29 00:02:34.740297 | orchestrator |
2026-03-29 00:02:34.740316 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-29 00:02:34.740333 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.740351 | orchestrator | + attachment = (known after apply)
2026-03-29 00:02:34.740369 | orchestrator | + availability_zone = "nova"
2026-03-29 00:02:34.740386 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.740404 | orchestrator | + image_id = (known after apply)
2026-03-29 00:02:34.740422 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.740441 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-29 00:02:34.740487 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.740503 | orchestrator | + size = 80
2026-03-29 00:02:34.740518 | orchestrator | + volume_retype_policy = "never"
2026-03-29 00:02:34.740533 | orchestrator | + volume_type = "ssd"
2026-03-29 00:02:34.740543 | orchestrator | }
2026-03-29 00:02:34.740552 | orchestrator |
2026-03-29 00:02:34.740567 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-29 00:02:34.740583 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.740598 | orchestrator | + attachment = (known after apply)
2026-03-29 00:02:34.740613 | orchestrator | + availability_zone = "nova"
2026-03-29 00:02:34.740629 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.740645 | orchestrator | + image_id = (known after apply)
2026-03-29 00:02:34.740662 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.740678 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-29 00:02:34.740694 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.740710 | orchestrator | + size = 80
2026-03-29 00:02:34.740726 | orchestrator | + volume_retype_policy = "never"
2026-03-29 00:02:34.740743 | orchestrator | + volume_type = "ssd"
2026-03-29 00:02:34.740759 | orchestrator | }
2026-03-29 00:02:34.740776 | orchestrator |
2026-03-29 00:02:34.740787 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-29 00:02:34.740797 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.740806 | orchestrator | + attachment = (known after apply)
2026-03-29 00:02:34.740816 | orchestrator | + availability_zone = "nova"
2026-03-29 00:02:34.740826 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.740835 | orchestrator | + image_id = (known after apply)
2026-03-29 00:02:34.740845 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.740867 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-29 00:02:34.740883 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.740899 | orchestrator | + size = 80
2026-03-29 00:02:34.740916 | orchestrator | + volume_retype_policy = "never"
2026-03-29 00:02:34.740931 | orchestrator | + volume_type = "ssd"
2026-03-29 00:02:34.740946 | orchestrator | }
2026-03-29 00:02:34.740961 | orchestrator |
2026-03-29 00:02:34.740975 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-29 00:02:34.740989 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.741005 | orchestrator | + attachment = (known after apply)
2026-03-29 00:02:34.741021 | orchestrator | + availability_zone = "nova"
2026-03-29 00:02:34.741050 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.741077 | orchestrator | + image_id = (known after apply)
2026-03-29 00:02:34.741093 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.741109 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-29 00:02:34.741124 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.741140 | orchestrator | + size = 80
2026-03-29 00:02:34.741156 | orchestrator | + volume_retype_policy = "never"
2026-03-29 00:02:34.741172 | orchestrator | + volume_type = "ssd"
2026-03-29 00:02:34.741188 | orchestrator | }
2026-03-29 00:02:34.741204 | orchestrator |
2026-03-29 00:02:34.741219 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-29 00:02:34.741234 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-29 00:02:34.741249 | orchestrator | + attachment = (known after apply)
2026-03-29 00:02:34.741265 | orchestrator | + availability_zone = "nova"
2026-03-29 00:02:34.741280 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.741297 | orchestrator | + image_id = (known after apply)
2026-03-29 00:02:34.741313 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.741329 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-29 00:02:34.741346 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.741359 | orchestrator | + size = 80
2026-03-29 00:02:34.741375 | orchestrator | + volume_retype_policy = "never"
2026-03-29 00:02:34.741391 | orchestrator | + volume_type = "ssd"
2026-03-29 00:02:34.741406 | orchestrator | }
2026-03-29 00:02:34.741421 | orchestrator |
2026-03-29 00:02:34.741438 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-29 00:02:34.741482 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.741500 | orchestrator | + attachment = (known after apply)
2026-03-29 00:02:34.741517 | orchestrator | + availability_zone = "nova"
2026-03-29 00:02:34.741533 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.741548 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.741566 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-29 00:02:34.741582 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.741598 | orchestrator | + size = 20
2026-03-29 00:02:34.741613 | orchestrator | + volume_retype_policy = "never"
2026-03-29 00:02:34.741630 | orchestrator | + volume_type = "ssd"
2026-03-29 00:02:34.741646 | orchestrator | }
2026-03-29 00:02:34.741662 | orchestrator |
2026-03-29 00:02:34.741679 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-29 00:02:34.741696 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.741710 | orchestrator | + attachment = (known after apply)
2026-03-29 00:02:34.741726 | orchestrator | + availability_zone = "nova"
2026-03-29 00:02:34.741741 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.741755 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.741770 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-29 00:02:34.741785 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.741801 | orchestrator | + size = 20
2026-03-29 00:02:34.741817 | orchestrator | + volume_retype_policy = "never"
2026-03-29 00:02:34.741833 | orchestrator | + volume_type = "ssd"
2026-03-29 00:02:34.741850 | orchestrator | }
2026-03-29 00:02:34.741867 | orchestrator |
2026-03-29 00:02:34.741883 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-29 00:02:34.741898 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.741908 | orchestrator | + attachment = (known after apply)
2026-03-29 00:02:34.741918 | orchestrator | + availability_zone = "nova"
2026-03-29 00:02:34.741928 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.741938 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.741948 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-29 00:02:34.741957 | orchestrator | + region = (known after apply)
2026-03-29 00:02:34.741978 | orchestrator | + size = 20
2026-03-29 00:02:34.741988 | orchestrator | + volume_retype_policy = "never"
2026-03-29 00:02:34.741998 | orchestrator | + volume_type = "ssd"
2026-03-29 00:02:34.742008 | orchestrator | }
2026-03-29 00:02:34.743562 | orchestrator |
2026-03-29 00:02:34.743647 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-29 00:02:34.743660 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-29 00:02:34.743671 | orchestrator | + attachment = (known after apply)
2026-03-29 00:02:34.743681 | orchestrator | + availability_zone = "nova"
2026-03-29 00:02:34.743691 | orchestrator | + id = (known after apply)
2026-03-29 00:02:34.743700 | orchestrator | + metadata = (known after apply)
2026-03-29 00:02:34.743710 | orchestrator | + name = "testbed-volume-3-node-3" 2026-03-29 00:02:34.743720 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.743729 | orchestrator | + size = 20 2026-03-29 00:02:34.743739 | orchestrator | + volume_retype_policy = "never" 2026-03-29 00:02:34.743749 | orchestrator | + volume_type = "ssd" 2026-03-29 00:02:34.743759 | orchestrator | } 2026-03-29 00:02:34.743769 | orchestrator | 2026-03-29 00:02:34.743778 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created 2026-03-29 00:02:34.743788 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-29 00:02:34.743797 | orchestrator | + attachment = (known after apply) 2026-03-29 00:02:34.743807 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.743816 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.743826 | orchestrator | + metadata = (known after apply) 2026-03-29 00:02:34.743836 | orchestrator | + name = "testbed-volume-4-node-4" 2026-03-29 00:02:34.743846 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.743867 | orchestrator | + size = 20 2026-03-29 00:02:34.743877 | orchestrator | + volume_retype_policy = "never" 2026-03-29 00:02:34.743887 | orchestrator | + volume_type = "ssd" 2026-03-29 00:02:34.743897 | orchestrator | } 2026-03-29 00:02:34.743907 | orchestrator | 2026-03-29 00:02:34.743917 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created 2026-03-29 00:02:34.743926 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-29 00:02:34.743933 | orchestrator | + attachment = (known after apply) 2026-03-29 00:02:34.743941 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.743949 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.743957 | orchestrator | + metadata = (known after apply) 2026-03-29 00:02:34.743965 | orchestrator | + name = "testbed-volume-5-node-5" 
2026-03-29 00:02:34.743987 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.743995 | orchestrator | + size = 20 2026-03-29 00:02:34.744004 | orchestrator | + volume_retype_policy = "never" 2026-03-29 00:02:34.744012 | orchestrator | + volume_type = "ssd" 2026-03-29 00:02:34.744020 | orchestrator | } 2026-03-29 00:02:34.744028 | orchestrator | 2026-03-29 00:02:34.744036 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created 2026-03-29 00:02:34.744044 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-29 00:02:34.744052 | orchestrator | + attachment = (known after apply) 2026-03-29 00:02:34.744059 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.744067 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.744075 | orchestrator | + metadata = (known after apply) 2026-03-29 00:02:34.744083 | orchestrator | + name = "testbed-volume-6-node-3" 2026-03-29 00:02:34.744091 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.744099 | orchestrator | + size = 20 2026-03-29 00:02:34.744107 | orchestrator | + volume_retype_policy = "never" 2026-03-29 00:02:34.744115 | orchestrator | + volume_type = "ssd" 2026-03-29 00:02:34.744122 | orchestrator | } 2026-03-29 00:02:34.744131 | orchestrator | 2026-03-29 00:02:34.744139 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created 2026-03-29 00:02:34.744147 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-29 00:02:34.744167 | orchestrator | + attachment = (known after apply) 2026-03-29 00:02:34.744175 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.744183 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.744191 | orchestrator | + metadata = (known after apply) 2026-03-29 00:02:34.744199 | orchestrator | + name = "testbed-volume-7-node-4" 2026-03-29 00:02:34.744207 | orchestrator | + region = (known after apply) 
2026-03-29 00:02:34.744215 | orchestrator | + size = 20 2026-03-29 00:02:34.744223 | orchestrator | + volume_retype_policy = "never" 2026-03-29 00:02:34.744231 | orchestrator | + volume_type = "ssd" 2026-03-29 00:02:34.744239 | orchestrator | } 2026-03-29 00:02:34.744247 | orchestrator | 2026-03-29 00:02:34.744255 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-29 00:02:34.744263 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-29 00:02:34.744271 | orchestrator | + attachment = (known after apply) 2026-03-29 00:02:34.744279 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.744287 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.744295 | orchestrator | + metadata = (known after apply) 2026-03-29 00:02:34.744303 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-29 00:02:34.744311 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.744319 | orchestrator | + size = 20 2026-03-29 00:02:34.744327 | orchestrator | + volume_retype_policy = "never" 2026-03-29 00:02:34.744335 | orchestrator | + volume_type = "ssd" 2026-03-29 00:02:34.744343 | orchestrator | } 2026-03-29 00:02:34.744351 | orchestrator | 2026-03-29 00:02:34.744359 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-29 00:02:34.744367 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-29 00:02:34.744375 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.744383 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.744391 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.744399 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.744407 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.744415 | orchestrator | + config_drive = true 2026-03-29 00:02:34.744423 | orchestrator | + created = (known after apply) 
2026-03-29 00:02:34.744431 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.744439 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-29 00:02:34.744470 | orchestrator | + force_delete = false 2026-03-29 00:02:34.744479 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.744486 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.744494 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.744502 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.744510 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.744518 | orchestrator | + name = "testbed-manager" 2026-03-29 00:02:34.744526 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.744534 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.744542 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.744550 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.744558 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.744566 | orchestrator | + user_data = (sensitive value) 2026-03-29 00:02:34.744574 | orchestrator | 2026-03-29 00:02:34.744582 | orchestrator | + block_device { 2026-03-29 00:02:34.744591 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.744599 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.744611 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.744620 | orchestrator | + multiattach = false 2026-03-29 00:02:34.744628 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.744640 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.744661 | orchestrator | } 2026-03-29 00:02:34.744675 | orchestrator | 2026-03-29 00:02:34.744688 | orchestrator | + network { 2026-03-29 00:02:34.744701 | orchestrator | + access_network = false 2026-03-29 00:02:34.744714 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.744725 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.744737 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.744750 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.744761 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.744774 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.744788 | orchestrator | } 2026-03-29 00:02:34.744802 | orchestrator | } 2026-03-29 00:02:34.744815 | orchestrator | 2026-03-29 00:02:34.744828 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-29 00:02:34.744843 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.744855 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.744868 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.744881 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.744895 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.744908 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.744923 | orchestrator | + config_drive = true 2026-03-29 00:02:34.744936 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.744961 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.744971 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.744979 | orchestrator | + force_delete = false 2026-03-29 00:02:34.744987 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.744995 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.745003 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.745011 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.745019 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.745027 | orchestrator | + name = "testbed-node-0" 2026-03-29 00:02:34.745035 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.745043 | orchestrator | + region 
= (known after apply) 2026-03-29 00:02:34.745051 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.745059 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.745067 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.745075 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.745082 | orchestrator | 2026-03-29 00:02:34.745090 | orchestrator | + block_device { 2026-03-29 00:02:34.745098 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.745106 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.745114 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.745122 | orchestrator | + multiattach = false 2026-03-29 00:02:34.745130 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.745138 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.745146 | orchestrator | } 2026-03-29 00:02:34.745154 | orchestrator | 2026-03-29 00:02:34.745162 | orchestrator | + network { 2026-03-29 00:02:34.745170 | orchestrator | + access_network = false 2026-03-29 00:02:34.745178 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.745186 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.745194 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.745202 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.745210 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.745218 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.745226 | orchestrator | } 2026-03-29 00:02:34.745234 | orchestrator | } 2026-03-29 00:02:34.745242 | orchestrator | 2026-03-29 00:02:34.745250 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-29 00:02:34.745258 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.745266 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 
00:02:34.745282 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.745290 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.745297 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.745305 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.745313 | orchestrator | + config_drive = true 2026-03-29 00:02:34.745321 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.745329 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.745337 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.745345 | orchestrator | + force_delete = false 2026-03-29 00:02:34.745353 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.745360 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.745369 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.745376 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.745384 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.745392 | orchestrator | + name = "testbed-node-1" 2026-03-29 00:02:34.745400 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.745408 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.745416 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.745424 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.745432 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.745440 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.745468 | orchestrator | 2026-03-29 00:02:34.745476 | orchestrator | + block_device { 2026-03-29 00:02:34.745484 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.745492 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.745500 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.745508 | orchestrator | + multiattach = false 2026-03-29 
00:02:34.745516 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.745524 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.745532 | orchestrator | } 2026-03-29 00:02:34.745540 | orchestrator | 2026-03-29 00:02:34.745547 | orchestrator | + network { 2026-03-29 00:02:34.745555 | orchestrator | + access_network = false 2026-03-29 00:02:34.745563 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.745571 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.745579 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.745587 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.745595 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.745602 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.745610 | orchestrator | } 2026-03-29 00:02:34.745618 | orchestrator | } 2026-03-29 00:02:34.745626 | orchestrator | 2026-03-29 00:02:34.745634 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-29 00:02:34.745642 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.745650 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.745658 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.745668 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.745676 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.745690 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.745698 | orchestrator | + config_drive = true 2026-03-29 00:02:34.745706 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.745714 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.745722 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.745730 | orchestrator | + force_delete = false 2026-03-29 00:02:34.745737 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 
00:02:34.745745 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.745753 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.745766 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.745774 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.745782 | orchestrator | + name = "testbed-node-2" 2026-03-29 00:02:34.745790 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.745803 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.745811 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.745820 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.745827 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.745835 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.745844 | orchestrator | 2026-03-29 00:02:34.745851 | orchestrator | + block_device { 2026-03-29 00:02:34.745859 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.745867 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.745875 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.745883 | orchestrator | + multiattach = false 2026-03-29 00:02:34.745891 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.745899 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.745907 | orchestrator | } 2026-03-29 00:02:34.745915 | orchestrator | 2026-03-29 00:02:34.745923 | orchestrator | + network { 2026-03-29 00:02:34.745930 | orchestrator | + access_network = false 2026-03-29 00:02:34.745938 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.745946 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.745954 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.745962 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.745970 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.745977 | orchestrator | + uuid 
= (known after apply) 2026-03-29 00:02:34.745985 | orchestrator | } 2026-03-29 00:02:34.745993 | orchestrator | } 2026-03-29 00:02:34.746001 | orchestrator | 2026-03-29 00:02:34.746009 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-29 00:02:34.747277 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.747299 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.747306 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.747313 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.747319 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.747326 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.747333 | orchestrator | + config_drive = true 2026-03-29 00:02:34.747340 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.747346 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.747353 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.747360 | orchestrator | + force_delete = false 2026-03-29 00:02:34.747367 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.747374 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.747380 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.747387 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.747395 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.747405 | orchestrator | + name = "testbed-node-3" 2026-03-29 00:02:34.747416 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.747427 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.747437 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.747469 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.747480 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.747491 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.747502 | orchestrator | 2026-03-29 00:02:34.747511 | orchestrator | + block_device { 2026-03-29 00:02:34.747532 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.747541 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.747551 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.747574 | orchestrator | + multiattach = false 2026-03-29 00:02:34.747584 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.747593 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.747603 | orchestrator | } 2026-03-29 00:02:34.747612 | orchestrator | 2026-03-29 00:02:34.747623 | orchestrator | + network { 2026-03-29 00:02:34.747633 | orchestrator | + access_network = false 2026-03-29 00:02:34.747644 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.747655 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.747665 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.747676 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.747687 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.747697 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.747707 | orchestrator | } 2026-03-29 00:02:34.747717 | orchestrator | } 2026-03-29 00:02:34.747728 | orchestrator | 2026-03-29 00:02:34.747740 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-29 00:02:34.747751 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.747761 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.747772 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.747782 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.747793 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.747803 | orchestrator | + availability_zone = "nova" 2026-03-29 
00:02:34.747814 | orchestrator | + config_drive = true 2026-03-29 00:02:34.747825 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.747837 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.747847 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.747856 | orchestrator | + force_delete = false 2026-03-29 00:02:34.747866 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.747877 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.747888 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.747899 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.747908 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.747918 | orchestrator | + name = "testbed-node-4" 2026-03-29 00:02:34.747928 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.747939 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.747949 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.747958 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.747967 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.747977 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.747986 | orchestrator | 2026-03-29 00:02:34.747997 | orchestrator | + block_device { 2026-03-29 00:02:34.748008 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.748019 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.748029 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.748039 | orchestrator | + multiattach = false 2026-03-29 00:02:34.748064 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.748076 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.748086 | orchestrator | } 2026-03-29 00:02:34.748096 | orchestrator | 2026-03-29 00:02:34.748107 | orchestrator | + network { 2026-03-29 00:02:34.748118 | orchestrator | + 
access_network = false 2026-03-29 00:02:34.748129 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.748139 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.748149 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.748159 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.748169 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.748179 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.748190 | orchestrator | } 2026-03-29 00:02:34.748200 | orchestrator | } 2026-03-29 00:02:34.748228 | orchestrator | 2026-03-29 00:02:34.748239 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-29 00:02:34.748250 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-29 00:02:34.748261 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-29 00:02:34.748272 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-29 00:02:34.748282 | orchestrator | + all_metadata = (known after apply) 2026-03-29 00:02:34.748292 | orchestrator | + all_tags = (known after apply) 2026-03-29 00:02:34.748302 | orchestrator | + availability_zone = "nova" 2026-03-29 00:02:34.748313 | orchestrator | + config_drive = true 2026-03-29 00:02:34.748324 | orchestrator | + created = (known after apply) 2026-03-29 00:02:34.748335 | orchestrator | + flavor_id = (known after apply) 2026-03-29 00:02:34.748345 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-29 00:02:34.748355 | orchestrator | + force_delete = false 2026-03-29 00:02:34.748374 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-29 00:02:34.748385 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.748396 | orchestrator | + image_id = (known after apply) 2026-03-29 00:02:34.748406 | orchestrator | + image_name = (known after apply) 2026-03-29 00:02:34.748416 | orchestrator | + key_pair = "testbed" 2026-03-29 00:02:34.748426 | orchestrator | 
+ name = "testbed-node-5" 2026-03-29 00:02:34.748438 | orchestrator | + power_state = "active" 2026-03-29 00:02:34.748511 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.748523 | orchestrator | + security_groups = (known after apply) 2026-03-29 00:02:34.748534 | orchestrator | + stop_before_destroy = false 2026-03-29 00:02:34.748544 | orchestrator | + updated = (known after apply) 2026-03-29 00:02:34.748555 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-29 00:02:34.748567 | orchestrator | 2026-03-29 00:02:34.748577 | orchestrator | + block_device { 2026-03-29 00:02:34.748588 | orchestrator | + boot_index = 0 2026-03-29 00:02:34.748599 | orchestrator | + delete_on_termination = false 2026-03-29 00:02:34.748609 | orchestrator | + destination_type = "volume" 2026-03-29 00:02:34.748620 | orchestrator | + multiattach = false 2026-03-29 00:02:34.748630 | orchestrator | + source_type = "volume" 2026-03-29 00:02:34.748641 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.748651 | orchestrator | } 2026-03-29 00:02:34.748662 | orchestrator | 2026-03-29 00:02:34.748673 | orchestrator | + network { 2026-03-29 00:02:34.748683 | orchestrator | + access_network = false 2026-03-29 00:02:34.748694 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-29 00:02:34.748704 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-29 00:02:34.748715 | orchestrator | + mac = (known after apply) 2026-03-29 00:02:34.748724 | orchestrator | + name = (known after apply) 2026-03-29 00:02:34.748734 | orchestrator | + port = (known after apply) 2026-03-29 00:02:34.748744 | orchestrator | + uuid = (known after apply) 2026-03-29 00:02:34.748754 | orchestrator | } 2026-03-29 00:02:34.748763 | orchestrator | } 2026-03-29 00:02:34.748772 | orchestrator | 2026-03-29 00:02:34.748781 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-29 00:02:34.748792 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-03-29 00:02:34.748802 | orchestrator | + fingerprint = (known after apply) 2026-03-29 00:02:34.748811 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.748822 | orchestrator | + name = "testbed" 2026-03-29 00:02:34.748831 | orchestrator | + private_key = (sensitive value) 2026-03-29 00:02:34.748841 | orchestrator | + public_key = (known after apply) 2026-03-29 00:02:34.748850 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.748859 | orchestrator | + user_id = (known after apply) 2026-03-29 00:02:34.748868 | orchestrator | } 2026-03-29 00:02:34.748878 | orchestrator | 2026-03-29 00:02:34.748888 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-29 00:02:34.748898 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-29 00:02:34.748919 | orchestrator | + device = (known after apply) 2026-03-29 00:02:34.748929 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.748939 | orchestrator | + instance_id = (known after apply) 2026-03-29 00:02:34.748949 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.748958 | orchestrator | + volume_id = (known after apply) 2026-03-29 00:02:34.748967 | orchestrator | } 2026-03-29 00:02:34.748977 | orchestrator | 2026-03-29 00:02:34.748987 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-29 00:02:34.748997 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-29 00:02:34.749006 | orchestrator | + device = (known after apply) 2026-03-29 00:02:34.749015 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.749024 | orchestrator | + instance_id = (known after apply) 2026-03-29 00:02:34.749033 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.749042 | orchestrator | + volume_id = (known after apply) 2026-03-29 
00:02:34.749052 | orchestrator | } 2026-03-29 00:02:34.749060 | orchestrator | 2026-03-29 00:02:34.749070 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-29 00:02:34.749080 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-29 00:02:34.749090 | orchestrator | + device = (known after apply) 2026-03-29 00:02:34.749100 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.749109 | orchestrator | + instance_id = (known after apply) 2026-03-29 00:02:34.749119 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.749128 | orchestrator | + volume_id = (known after apply) 2026-03-29 00:02:34.749138 | orchestrator | } 2026-03-29 00:02:34.749147 | orchestrator | 2026-03-29 00:02:34.749157 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-03-29 00:02:34.749167 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-29 00:02:34.749176 | orchestrator | + device = (known after apply) 2026-03-29 00:02:34.749185 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.749211 | orchestrator | + instance_id = (known after apply) 2026-03-29 00:02:34.749224 | orchestrator | + region = (known after apply) 2026-03-29 00:02:34.749234 | orchestrator | + volume_id = (known after apply) 2026-03-29 00:02:34.749243 | orchestrator | } 2026-03-29 00:02:34.749253 | orchestrator | 2026-03-29 00:02:34.749262 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-03-29 00:02:34.749272 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-29 00:02:34.749282 | orchestrator | + device = (known after apply) 2026-03-29 00:02:34.749292 | orchestrator | + id = (known after apply) 2026-03-29 00:02:34.749303 | orchestrator | + instance_id = (known after apply) 2026-03-29 00:02:34.749322 | 
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-03-29 00:02:34.758118 | orchestrator |       + ip_version = 4
2026-03-29 00:02:34.758123 | orchestrator |       + ipv6_address_mode = (known after apply)
2026-03-29 00:02:34.758128 | orchestrator |       + ipv6_ra_mode = (known after apply)
2026-03-29 00:02:34.758133 | orchestrator |       + name = "subnet-testbed-management"
2026-03-29 00:02:34.758138 | orchestrator |       + network_id = (known after apply)
2026-03-29 00:02:34.758143 | orchestrator |       + no_gateway = false
2026-03-29 00:02:34.758148 | orchestrator |       + region = (known after apply)
2026-03-29 00:02:34.758153 | orchestrator |       + service_types = (known after apply)
2026-03-29 00:02:34.758161 | orchestrator |       + tenant_id = (known after apply)
2026-03-29 00:02:34.758166 | orchestrator |
2026-03-29 00:02:34.758171 | orchestrator |       + allocation_pool {
2026-03-29 00:02:34.758176 | orchestrator |           + end = "192.168.31.250"
2026-03-29 00:02:34.758181 | orchestrator |           + start = "192.168.31.200"
2026-03-29 00:02:34.758186 | orchestrator |         }
2026-03-29 00:02:34.758191 | orchestrator |     }
2026-03-29 00:02:34.758195 | orchestrator |
2026-03-29 00:02:34.758200 | orchestrator |   # terraform_data.image will be created
2026-03-29 00:02:34.758205 | orchestrator |   + resource "terraform_data" "image" {
2026-03-29 00:02:34.762399 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.762414 | orchestrator |       + input = "Ubuntu 24.04"
2026-03-29 00:02:34.762419 | orchestrator |       + output = (known after apply)
2026-03-29 00:02:34.762424 | orchestrator |     }
2026-03-29 00:02:34.762430 | orchestrator |
2026-03-29 00:02:34.762435 | orchestrator |   # terraform_data.image_node will be created
2026-03-29 00:02:34.762440 | orchestrator |   + resource "terraform_data" "image_node" {
2026-03-29 00:02:34.762458 | orchestrator |       + id = (known after apply)
2026-03-29 00:02:34.762463 | orchestrator |       + input = "Ubuntu 24.04"
2026-03-29 00:02:34.762469 | orchestrator |       + output = (known after apply)
2026-03-29 00:02:34.762474 | orchestrator |     }
2026-03-29 00:02:34.762478 | orchestrator |
2026-03-29 00:02:34.762483 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-29 00:02:34.762488 | orchestrator |
2026-03-29 00:02:34.762493 | orchestrator | Changes to Outputs:
2026-03-29 00:02:34.762498 | orchestrator |   + manager_address = (sensitive value)
2026-03-29 00:02:34.762503 | orchestrator |   + private_key = (sensitive value)
2026-03-29 00:02:34.941759 | orchestrator | terraform_data.image: Creating...
2026-03-29 00:02:34.942198 | orchestrator | terraform_data.image: Creation complete after 0s [id=328b49e0-fb3b-19c6-92f2-5106c0d7bff4]
2026-03-29 00:02:35.048468 | orchestrator | terraform_data.image_node: Creating...
2026-03-29 00:02:35.048518 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=9d3840ce-7d90-ca23-b65d-07dd92a85e5b]
2026-03-29 00:02:35.070028 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-29 00:02:35.074097 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-29 00:02:35.097196 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-29 00:02:35.099025 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-29 00:02:35.099553 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-29 00:02:35.100703 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-29 00:02:35.100732 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-29 00:02:35.103853 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-29 00:02:35.104279 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-29 00:02:35.114670 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-29 00:02:35.577803 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-29 00:02:35.582797 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-29 00:02:35.583158 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-29 00:02:35.586215 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-29 00:02:35.680587 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-03-29 00:02:35.685221 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-29 00:02:37.023592 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=6fe36f0b-fee0-4e04-84e9-a87bec22108c]
2026-03-29 00:02:37.031784 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-29 00:02:38.870560 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=f2ea4a06-d51a-493c-82b1-bac83ac89551]
2026-03-29 00:02:38.874492 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=3da40c29-5f2c-4690-a312-2dad3a63ee41]
2026-03-29 00:02:38.874834 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-29 00:02:38.900275 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=3634f6e0-2fc1-46dc-9b61-f009b476dcdf]
2026-03-29 00:02:38.914776 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-29 00:02:38.925304 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=071e4fdb-2f21-4724-b6d5-ab202ed81b2c]
2026-03-29 00:02:38.925387 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=eba3fb10-bf4f-42e8-8781-9c26ea140c89]
2026-03-29 00:02:38.927540 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c]
2026-03-29 00:02:38.931217 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-29 00:02:38.932238 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-29 00:02:38.956482 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=637791c8-8ac8-49ce-9448-9b664b68bb9c]
2026-03-29 00:02:38.962778 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-29 00:02:39.012336 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=45f66f48-5092-4630-bbd4-e7a21fea6d53]
2026-03-29 00:02:39.020791 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-29 00:02:39.054162 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-29 00:02:39.057226 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-29 00:02:39.059328 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=deb2b35e4b1679e1a66bda3fd4023ec8dcf56040]
2026-03-29 00:02:39.063066 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=0a77c1733f314f69f868fe0f8a697f259cf5fcf3]
2026-03-29 00:02:39.063795 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-29 00:02:39.222843 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=9c3edf6b-7c95-4460-bef4-a1ae8fb1460d]
2026-03-29 00:02:40.383363 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=a8f011c6-253f-4d2e-a794-f27e3004ce25]
2026-03-29 00:02:40.391173 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=60721b90-5899-4b8f-a850-b8af33ad21f9]
2026-03-29 00:02:40.400414 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-29 00:02:42.334576 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=ad319918-57bd-4a4f-a2a3-8dffba7c3c21]
2026-03-29 00:02:42.380383 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=399884be-143f-480f-85e8-b5f7de120e28]
2026-03-29 00:02:42.402003 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=bf152443-0fe9-4f46-a676-7ec0334a56b1]
2026-03-29 00:02:43.629853 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=318cc609-7e64-4013-b7ec-e8927e97946a]
2026-03-29 00:02:43.629893 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=2ada047f-8836-45d8-9369-df8d0b6945b8]
2026-03-29 00:02:43.629905 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=953ed705-bbcf-48ea-89de-0fb88bd35712]
2026-03-29 00:02:45.811915 | orchestrator | openstack_networking_router_v2.router: Creation complete after 6s [id=e00e5228-08dc-4c2f-89ae-5b8734021efe]
2026-03-29 00:02:45.818096 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-29 00:02:45.819320 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-29 00:02:45.821117 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-29 00:02:46.138992 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=e5071c5a-cb52-46b7-b504-e9ba699514f3]
2026-03-29 00:02:46.149024 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-29 00:02:46.149094 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-29 00:02:46.151485 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-29 00:02:46.155789 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-29 00:02:46.164526 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-29 00:02:46.165127 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-29 00:02:46.166614 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-29 00:02:46.169675 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-29 00:02:46.365635 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=a89054b0-83c3-469b-bdc9-46dba2515f98]
2026-03-29 00:02:46.377393 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-29 00:02:46.598560 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=bc810443-d8d3-4373-bbc5-a2ee78e37fff]
2026-03-29 00:02:46.605759 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-29 00:02:47.150257 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=30d89671-fdfb-4db4-95c4-dbdcf7a93901]
2026-03-29 00:02:47.155074 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-29 00:02:47.516429 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=84cf7f1e-052b-4e6a-bd54-1b61b7b52d91]
2026-03-29 00:02:47.520978 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-29 00:02:47.528438 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=660948fb-ab79-4bef-a155-e6072c5eb8fe]
2026-03-29 00:02:47.533791 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-29 00:02:47.675136 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=e86a04eb-8a94-4712-b0f1-980266bf378e]
2026-03-29 00:02:47.686101 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-29 00:02:47.732274 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=b3f1e0ba-e66d-4dc6-91b2-fbbf308e540a]
2026-03-29 00:02:47.738702 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-29 00:02:47.784830 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=bdb6ba17-471c-44fb-a72a-1eb8573cffa1]
2026-03-29 00:02:47.794943 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-29 00:02:48.104931 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=9978e08b-77b6-4705-b5c8-af0241794e75]
2026-03-29 00:02:48.222785 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=833ddbb8-d67a-4f1c-9ecc-807d50af6fa5]
2026-03-29 00:02:48.385940 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=e36dc698-e049-4f9a-a73d-21661cc2ca50]
2026-03-29 00:02:48.612997 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=ff4dcaeb-cfaf-4fa0-9e91-771bb2bf646a]
2026-03-29 00:02:48.735836 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 3s [id=1ce69c69-92f2-4e6c-8ef5-272475e2bb23]
2026-03-29 00:02:48.826127 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=227abbe4-5e61-46b5-8ae8-cb075b264f0d]
2026-03-29 00:02:48.839838 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=e676f74a-63a3-423d-91dc-bd75f9e3b7f3]
2026-03-29 00:02:48.897099 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=02c17213-36bb-4fc2-9c9e-552469ef5cc3]
2026-03-29 00:02:49.197178 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=80c6499e-ea40-4e89-98d7-1f80816d65d9]
2026-03-29 00:02:49.817110 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=13c47613-0ba1-4118-9804-5e563ca18ebd]
2026-03-29 00:02:49.841720 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-29 00:02:49.849792 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-29 00:02:49.850079 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-29 00:02:49.850489 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-29 00:02:49.866706 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-29 00:02:49.870100 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-29 00:02:49.875042 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-29 00:02:52.095222 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=04ef75af-f6c3-4dc1-b210-baa4a27c975c]
2026-03-29 00:02:52.111418 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-29 00:02:52.111735 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-29 00:02:52.114095 | orchestrator | local_file.inventory: Creating...
2026-03-29 00:02:52.120110 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=85a475a558145c5c1daaf8020d251becc77f9b73]
2026-03-29 00:02:52.122963 | orchestrator | local_file.inventory: Creation complete after 0s [id=278293e87930c5c9fa6f44ab27a8badff59174b1]
2026-03-29 00:02:54.216319 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=04ef75af-f6c3-4dc1-b210-baa4a27c975c]
2026-03-29 00:02:59.850362 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-29 00:02:59.850548 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-29 00:02:59.851409 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-29 00:02:59.867777 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-29 00:02:59.869000 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-29 00:02:59.875358 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-29 00:03:09.859302 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-29 00:03:09.859410 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-29 00:03:09.859435 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-29 00:03:09.868825 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-29 00:03:09.869920 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-29 00:03:09.876275 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-29 00:03:19.868010 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-29 00:03:19.868133 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-29 00:03:19.868164 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-29 00:03:19.869237 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-29 00:03:19.870393 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-29 00:03:19.876923 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-29 00:03:21.738906 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 32s [id=523315df-5fea-46c5-ba1a-f62563e732f8]
2026-03-29 00:03:29.875804 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-03-29 00:03:29.875901 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-03-29 00:03:29.875913 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-03-29 00:03:29.875932 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-03-29 00:03:29.877993 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-03-29 00:03:30.870545 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=3cbe5d27-503b-4761-845b-9205e6ce020f]
2026-03-29 00:03:30.892005 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=6712e380-0047-4ef5-83fb-d48cb50b7461]
2026-03-29 00:03:39.884988 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-03-29 00:03:39.885070 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-03-29 00:03:39.885086 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-03-29 00:03:49.890205 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed]
2026-03-29 00:03:49.890305 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [1m0s elapsed]
2026-03-29 00:03:49.890314 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m0s elapsed]
2026-03-29 00:03:51.094729 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m1s [id=e2950f05-49c5-4043-8257-92012bf3aa2a]
2026-03-29 00:03:59.899041 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [1m10s elapsed]
2026-03-29 00:03:59.899245 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m10s elapsed]
2026-03-29 00:04:01.665475 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 1m12s [id=c49d0e96-8e58-4a13-8fa6-d01ed27b424a]
2026-03-29 00:04:09.907095 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m20s elapsed]
2026-03-29 00:04:11.103976 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m21s [id=e430b1d3-e2b5-4135-abff-d0e6f8535bec]
2026-03-29 00:04:11.115166 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-29 00:04:11.121720 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4597467314006843342]
2026-03-29 00:04:11.125658 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-29 00:04:11.132656 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-29 00:04:11.135235 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-29 00:04:11.136066 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-29 00:04:11.137047 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-29 00:04:11.141769 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-29 00:04:11.142277 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-29 00:04:11.146919 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-29 00:04:11.152241 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-29 00:04:11.155978 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-29 00:04:14.504457 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=3cbe5d27-503b-4761-845b-9205e6ce020f/071e4fdb-2f21-4724-b6d5-ab202ed81b2c]
2026-03-29 00:04:14.539812 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=e430b1d3-e2b5-4135-abff-d0e6f8535bec/45f66f48-5092-4630-bbd4-e7a21fea6d53]
2026-03-29 00:04:14.544158 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=6712e380-0047-4ef5-83fb-d48cb50b7461/3634f6e0-2fc1-46dc-9b61-f009b476dcdf]
2026-03-29 00:04:14.568576 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=e430b1d3-e2b5-4135-abff-d0e6f8535bec/9c3edf6b-7c95-4460-bef4-a1ae8fb1460d]
2026-03-29 00:04:14.588139 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=3cbe5d27-503b-4761-845b-9205e6ce020f/eba3fb10-bf4f-42e8-8781-9c26ea140c89]
2026-03-29 00:04:14.774819 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=6712e380-0047-4ef5-83fb-d48cb50b7461/7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c]
2026-03-29 00:04:20.651692 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=6712e380-0047-4ef5-83fb-d48cb50b7461/f2ea4a06-d51a-493c-82b1-bac83ac89551]
2026-03-29 00:04:20.663109 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=3cbe5d27-503b-4761-845b-9205e6ce020f/3da40c29-5f2c-4690-a312-2dad3a63ee41]
2026-03-29 00:04:20.679901 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=e430b1d3-e2b5-4135-abff-d0e6f8535bec/637791c8-8ac8-49ce-9448-9b664b68bb9c]
2026-03-29 00:04:21.161331 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-29 00:04:31.170792 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-29 00:04:31.589668 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=b0236c85-9e61-474b-a025-d25e2f8a2210]
2026-03-29 00:04:31.785372 | orchestrator |
2026-03-29 00:04:31.785464 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-29 00:04:31.785478 | orchestrator |
2026-03-29 00:04:31.785487 | orchestrator | Outputs:
2026-03-29 00:04:31.785576 | orchestrator |
2026-03-29 00:04:31.785586 | orchestrator | manager_address =
2026-03-29 00:04:31.785595 | orchestrator | private_key =
2026-03-29 00:04:32.211227 | orchestrator | ok: Runtime: 0:02:04.237989
2026-03-29 00:04:32.231604 |
2026-03-29 00:04:32.231718 | TASK [Fetch manager address]
2026-03-29 00:04:32.708407 | orchestrator | ok
2026-03-29 00:04:32.719192 |
2026-03-29 00:04:32.719343 | TASK [Set manager_host address]
2026-03-29 00:04:32.802776 | orchestrator | ok
2026-03-29 00:04:32.810047 |
2026-03-29 00:04:32.810172 | LOOP [Update ansible collections]
2026-03-29 00:04:33.855661 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-29 00:04:33.855935 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-29 00:04:33.855975 | orchestrator | Starting galaxy collection install process
2026-03-29 00:04:33.855999 | orchestrator | Process install dependency map
2026-03-29 00:04:33.856022 | orchestrator | Starting collection install process
2026-03-29 00:04:33.856042 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-03-29 00:04:33.856067 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-03-29 00:04:33.856095 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-29 00:04:33.856142 | orchestrator | ok: Item: commons Runtime: 0:00:00.665302
2026-03-29 00:04:34.782153 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-29 00:04:34.782339 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-29 00:04:34.782390 | orchestrator | Starting galaxy collection install process
2026-03-29 00:04:34.782428 | orchestrator | Process install dependency map
2026-03-29 00:04:34.782484 | orchestrator | Starting collection install process
2026-03-29 00:04:34.782518 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-03-29 00:04:34.782553 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-03-29 00:04:34.782637 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-29 00:04:34.782696 | orchestrator | ok: Item: services Runtime: 0:00:00.652842
2026-03-29 00:04:34.814588 |
2026-03-29 00:04:34.814749 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-29 00:04:45.397796 | orchestrator | ok
2026-03-29 00:04:45.408652 |
2026-03-29 00:04:45.408793 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-29 00:05:45.455422 | orchestrator | ok
2026-03-29 00:05:45.465949 |
2026-03-29 00:05:45.466090 | TASK [Fetch manager ssh hostkey]
2026-03-29 00:05:47.045041 | orchestrator | Output suppressed because no_log was given
2026-03-29 00:05:47.052404 |
2026-03-29 00:05:47.052562 | TASK [Get ssh keypair from terraform environment]
2026-03-29 00:05:47.586958 | orchestrator | ok: Runtime: 0:00:00.006309
2026-03-29 00:05:47.602590 |
2026-03-29 00:05:47.602751 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-29 00:05:47.654695 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-29 00:05:47.666123 | 2026-03-29 00:05:47.666252 | TASK [Run manager part 0] 2026-03-29 00:05:48.838419 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-29 00:05:48.901654 | orchestrator | 2026-03-29 00:05:48.901728 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-29 00:05:48.901740 | orchestrator | 2026-03-29 00:05:48.901761 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-29 00:05:50.799521 | orchestrator | ok: [testbed-manager] 2026-03-29 00:05:50.799591 | orchestrator | 2026-03-29 00:05:50.799620 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-29 00:05:50.799633 | orchestrator | 2026-03-29 00:05:50.799645 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:05:52.865402 | orchestrator | ok: [testbed-manager] 2026-03-29 00:05:52.865478 | orchestrator | 2026-03-29 00:05:52.865491 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-29 00:05:53.557089 | orchestrator | ok: [testbed-manager] 2026-03-29 00:05:53.557209 | orchestrator | 2026-03-29 00:05:53.557223 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-29 00:05:53.606390 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:05:53.606435 | orchestrator | 2026-03-29 00:05:53.606537 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-29 
00:05:53.642502 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:05:53.642555 | orchestrator | 2026-03-29 00:05:53.642563 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-29 00:05:53.692682 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:05:53.692742 | orchestrator | 2026-03-29 00:05:53.692752 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-29 00:05:54.394199 | orchestrator | changed: [testbed-manager] 2026-03-29 00:05:54.394266 | orchestrator | 2026-03-29 00:05:54.394276 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-29 00:08:50.529972 | orchestrator | changed: [testbed-manager] 2026-03-29 00:08:50.531830 | orchestrator | 2026-03-29 00:08:50.531865 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-29 00:10:23.418612 | orchestrator | changed: [testbed-manager] 2026-03-29 00:10:23.418686 | orchestrator | 2026-03-29 00:10:23.418706 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-29 00:10:47.888519 | orchestrator | changed: [testbed-manager] 2026-03-29 00:10:47.888588 | orchestrator | 2026-03-29 00:10:47.888605 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-29 00:10:57.303353 | orchestrator | changed: [testbed-manager] 2026-03-29 00:10:57.303445 | orchestrator | 2026-03-29 00:10:57.303464 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-29 00:10:57.357388 | orchestrator | ok: [testbed-manager] 2026-03-29 00:10:57.357476 | orchestrator | 2026-03-29 00:10:57.357494 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-29 00:10:58.162174 | orchestrator | ok: [testbed-manager] 2026-03-29 00:10:58.162335 
| orchestrator | 2026-03-29 00:10:58.162360 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-29 00:10:58.908327 | orchestrator | changed: [testbed-manager] 2026-03-29 00:10:58.908407 | orchestrator | 2026-03-29 00:10:58.908421 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-29 00:11:05.085400 | orchestrator | changed: [testbed-manager] 2026-03-29 00:11:05.085518 | orchestrator | 2026-03-29 00:11:05.085547 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-29 00:11:12.530843 | orchestrator | changed: [testbed-manager] 2026-03-29 00:11:12.530889 | orchestrator | 2026-03-29 00:11:12.530897 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-29 00:11:15.281621 | orchestrator | changed: [testbed-manager] 2026-03-29 00:11:15.281698 | orchestrator | 2026-03-29 00:11:15.281710 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-29 00:11:16.976501 | orchestrator | changed: [testbed-manager] 2026-03-29 00:11:16.976613 | orchestrator | 2026-03-29 00:11:16.976631 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-29 00:11:18.101712 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-29 00:11:18.102633 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-29 00:11:18.102671 | orchestrator | 2026-03-29 00:11:18.102883 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-29 00:11:18.146262 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-29 00:11:18.146345 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-03-29 00:11:18.146360 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-29 00:11:18.146374 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-03-29 00:11:25.136757 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-29 00:11:25.136850 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-29 00:11:25.136864 | orchestrator | 2026-03-29 00:11:25.136877 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-29 00:11:25.709987 | orchestrator | changed: [testbed-manager] 2026-03-29 00:11:25.710131 | orchestrator | 2026-03-29 00:11:25.710149 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-29 00:14:47.181422 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-29 00:14:47.181503 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-29 00:14:47.181513 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-29 00:14:47.181519 | orchestrator | 2026-03-29 00:14:47.181527 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-29 00:14:49.396535 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-29 00:14:49.396593 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-29 00:14:49.396604 | orchestrator | 2026-03-29 00:14:49.396616 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-29 00:14:49.396626 | orchestrator | 2026-03-29 00:14:49.396635 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:14:50.747581 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:50.747676 | orchestrator | 
2026-03-29 00:14:50.747693 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-29 00:14:50.798996 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:50.799052 | orchestrator | 2026-03-29 00:14:50.799061 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-29 00:14:50.877439 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:50.877493 | orchestrator | 2026-03-29 00:14:50.877503 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-29 00:14:51.643830 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:51.643902 | orchestrator | 2026-03-29 00:14:51.643911 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-29 00:14:52.322217 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:52.322282 | orchestrator | 2026-03-29 00:14:52.322298 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-29 00:14:53.610323 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-29 00:14:53.610361 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-29 00:14:53.610367 | orchestrator | 2026-03-29 00:14:53.610373 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-29 00:14:55.015575 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:55.015642 | orchestrator | 2026-03-29 00:14:55.015653 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-29 00:14:56.729859 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 00:14:56.729907 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-29 00:14:56.729921 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-29 00:14:56.729927 
| orchestrator | 2026-03-29 00:14:56.729933 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-29 00:14:56.792752 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:56.792801 | orchestrator | 2026-03-29 00:14:56.792806 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-29 00:14:56.851331 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:56.851364 | orchestrator | 2026-03-29 00:14:56.851369 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-29 00:14:57.398507 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:57.398614 | orchestrator | 2026-03-29 00:14:57.398639 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-29 00:14:57.479929 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:57.480011 | orchestrator | 2026-03-29 00:14:57.480025 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-29 00:14:58.342578 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-29 00:14:58.342642 | orchestrator | changed: [testbed-manager] 2026-03-29 00:14:58.342723 | orchestrator | 2026-03-29 00:14:58.342733 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-29 00:14:58.369143 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:58.369214 | orchestrator | 2026-03-29 00:14:58.369232 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-29 00:14:58.411525 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:58.411605 | orchestrator | 2026-03-29 00:14:58.411616 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-29 00:14:58.454149 | orchestrator | skipping: 
[testbed-manager] 2026-03-29 00:14:58.454205 | orchestrator | 2026-03-29 00:14:58.454212 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-29 00:14:58.529588 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:14:58.529648 | orchestrator | 2026-03-29 00:14:58.529658 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-29 00:14:59.253933 | orchestrator | ok: [testbed-manager] 2026-03-29 00:14:59.254076 | orchestrator | 2026-03-29 00:14:59.254098 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-29 00:14:59.254110 | orchestrator | 2026-03-29 00:14:59.254124 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:15:00.623885 | orchestrator | ok: [testbed-manager] 2026-03-29 00:15:00.623947 | orchestrator | 2026-03-29 00:15:00.623956 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-29 00:15:01.585353 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:01.585634 | orchestrator | 2026-03-29 00:15:01.585656 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:15:01.585669 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-03-29 00:15:01.585681 | orchestrator | 2026-03-29 00:15:02.165861 | orchestrator | ok: Runtime: 0:09:13.663802 2026-03-29 00:15:02.186086 | 2026-03-29 00:15:02.186241 | TASK [Point out that the log in on the manager is now possible] 2026-03-29 00:15:02.221277 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
2026-03-29 00:15:02.229532 | 2026-03-29 00:15:02.229638 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-29 00:15:02.261654 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-29 00:15:02.269335 | 2026-03-29 00:15:02.269451 | TASK [Run manager part 1 + 2] 2026-03-29 00:15:03.922758 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-29 00:15:03.982121 | orchestrator | 2026-03-29 00:15:03.982170 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-29 00:15:03.982177 | orchestrator | 2026-03-29 00:15:03.982189 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:15:06.954080 | orchestrator | ok: [testbed-manager] 2026-03-29 00:15:06.954164 | orchestrator | 2026-03-29 00:15:06.954215 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-29 00:15:06.992305 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:15:06.992375 | orchestrator | 2026-03-29 00:15:06.992390 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-29 00:15:07.038984 | orchestrator | ok: [testbed-manager] 2026-03-29 00:15:07.039079 | orchestrator | 2026-03-29 00:15:07.039108 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-29 00:15:07.096096 | orchestrator | ok: [testbed-manager] 2026-03-29 00:15:07.096160 | orchestrator | 2026-03-29 00:15:07.096172 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-29 00:15:07.182346 | orchestrator | ok: [testbed-manager] 2026-03-29 00:15:07.182405 | orchestrator | 2026-03-29 00:15:07.182414 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-29 00:15:07.247459 | orchestrator | ok: [testbed-manager] 2026-03-29 00:15:07.247547 | orchestrator | 2026-03-29 00:15:07.247569 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-29 00:15:07.301109 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-29 00:15:07.301198 | orchestrator | 2026-03-29 00:15:07.301214 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-29 00:15:08.018050 | orchestrator | ok: [testbed-manager] 2026-03-29 00:15:08.018350 | orchestrator | 2026-03-29 00:15:08.018370 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-29 00:15:08.059725 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:15:08.059785 | orchestrator | 2026-03-29 00:15:08.059792 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-29 00:15:09.445463 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:09.445669 | orchestrator | 2026-03-29 00:15:09.445688 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-29 00:15:10.004856 | orchestrator | ok: [testbed-manager] 2026-03-29 00:15:10.004956 | orchestrator | 2026-03-29 00:15:10.004975 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-29 00:15:11.159726 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:11.159857 | orchestrator | 2026-03-29 00:15:11.159880 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-29 00:15:25.922313 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:25.922412 | orchestrator | 
2026-03-29 00:15:25.922430 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-29 00:15:26.557691 | orchestrator | ok: [testbed-manager] 2026-03-29 00:15:26.559839 | orchestrator | 2026-03-29 00:15:26.559908 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-29 00:15:26.623527 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:15:26.623614 | orchestrator | 2026-03-29 00:15:26.623629 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-29 00:15:27.571726 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:27.571843 | orchestrator | 2026-03-29 00:15:27.571863 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-29 00:15:28.530242 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:28.530302 | orchestrator | 2026-03-29 00:15:28.530313 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-29 00:15:29.101692 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:29.101804 | orchestrator | 2026-03-29 00:15:29.101824 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-29 00:15:29.155971 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-29 00:15:29.156111 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-29 00:15:29.156140 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-29 00:15:29.156160 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-29 00:15:32.118325 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:32.118395 | orchestrator | 2026-03-29 00:15:32.118410 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-29 00:15:40.700197 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-29 00:15:40.700318 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-29 00:15:40.700544 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-29 00:15:40.700571 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-29 00:15:40.700601 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-29 00:15:40.700620 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-29 00:15:40.700639 | orchestrator | 2026-03-29 00:15:40.700659 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-29 00:15:41.750383 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:41.750478 | orchestrator | 2026-03-29 00:15:41.750663 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-29 00:15:44.769365 | orchestrator | changed: [testbed-manager] 2026-03-29 00:15:44.769410 | orchestrator | 2026-03-29 00:15:44.769418 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-29 00:15:44.815317 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:15:44.815400 | orchestrator | 2026-03-29 00:15:44.815416 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-29 00:17:21.190039 | orchestrator | changed: [testbed-manager] 2026-03-29 00:17:21.190086 | orchestrator | 2026-03-29 00:17:21.190092 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-29 00:17:22.266302 | orchestrator | ok: [testbed-manager] 2026-03-29 00:17:22.266338 | 
orchestrator | 2026-03-29 00:17:22.266345 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:17:22.266351 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-03-29 00:17:22.266355 | orchestrator | 2026-03-29 00:17:22.404741 | orchestrator | ok: Runtime: 0:02:19.780678 2026-03-29 00:17:22.415228 | 2026-03-29 00:17:22.415351 | TASK [Reboot manager] 2026-03-29 00:17:23.949258 | orchestrator | ok: Runtime: 0:00:01.004471 2026-03-29 00:17:23.961782 | 2026-03-29 00:17:23.962088 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-29 00:17:38.577410 | orchestrator | ok 2026-03-29 00:17:38.585043 | 2026-03-29 00:17:38.585161 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-29 00:18:38.630788 | orchestrator | ok 2026-03-29 00:18:38.641682 | 2026-03-29 00:18:38.641830 | TASK [Deploy manager + bootstrap nodes] 2026-03-29 00:18:41.086412 | orchestrator | 2026-03-29 00:18:41.086544 | orchestrator | # DEPLOY MANAGER 2026-03-29 00:18:41.086555 | orchestrator | 2026-03-29 00:18:41.086583 | orchestrator | + set -e 2026-03-29 00:18:41.086589 | orchestrator | + echo 2026-03-29 00:18:41.086595 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-29 00:18:41.086602 | orchestrator | + echo 2026-03-29 00:18:41.086624 | orchestrator | + cat /opt/manager-vars.sh 2026-03-29 00:18:41.090092 | orchestrator | export NUMBER_OF_NODES=6 2026-03-29 00:18:41.090133 | orchestrator | 2026-03-29 00:18:41.090138 | orchestrator | export CEPH_VERSION=reef 2026-03-29 00:18:41.090144 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-29 00:18:41.090149 | orchestrator | export MANAGER_VERSION=9.5.0 2026-03-29 00:18:41.090161 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-29 00:18:41.090165 | orchestrator | 2026-03-29 00:18:41.090173 | orchestrator | export ARA=false 2026-03-29 00:18:41.090177 | orchestrator 
| export DEPLOY_MODE=manager 2026-03-29 00:18:41.090184 | orchestrator | export TEMPEST=true 2026-03-29 00:18:41.090188 | orchestrator | export IS_ZUUL=true 2026-03-29 00:18:41.090192 | orchestrator | 2026-03-29 00:18:41.090199 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-03-29 00:18:41.090204 | orchestrator | export EXTERNAL_API=false 2026-03-29 00:18:41.090208 | orchestrator | 2026-03-29 00:18:41.090211 | orchestrator | export IMAGE_USER=ubuntu 2026-03-29 00:18:41.090218 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-29 00:18:41.090222 | orchestrator | 2026-03-29 00:18:41.090225 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-29 00:18:41.090441 | orchestrator | 2026-03-29 00:18:41.090459 | orchestrator | + echo 2026-03-29 00:18:41.090466 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 00:18:41.091342 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 00:18:41.091359 | orchestrator | ++ INTERACTIVE=false 2026-03-29 00:18:41.091397 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 00:18:41.091404 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 00:18:41.091627 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 00:18:41.091689 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 00:18:41.091695 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 00:18:41.091703 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 00:18:41.091706 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 00:18:41.091711 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 00:18:41.091715 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 00:18:41.091719 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 00:18:41.091723 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 00:18:41.091746 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 00:18:41.091757 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 00:18:41.091761 | orchestrator | ++ export ARA=false 
2026-03-29 00:18:41.091765 | orchestrator | ++ ARA=false 2026-03-29 00:18:41.091841 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 00:18:41.091851 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 00:18:41.091855 | orchestrator | ++ export TEMPEST=true 2026-03-29 00:18:41.091858 | orchestrator | ++ TEMPEST=true 2026-03-29 00:18:41.091862 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 00:18:41.091866 | orchestrator | ++ IS_ZUUL=true 2026-03-29 00:18:41.091889 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-03-29 00:18:41.091894 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-03-29 00:18:41.091898 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 00:18:41.091902 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 00:18:41.091937 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 00:18:41.091945 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 00:18:41.091949 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 00:18:41.091953 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 00:18:41.091956 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 00:18:41.091960 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 00:18:41.091980 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-29 00:18:41.151914 | orchestrator | + docker version 2026-03-29 00:18:41.268367 | orchestrator | Client: Docker Engine - Community 2026-03-29 00:18:41.268459 | orchestrator | Version: 27.5.1 2026-03-29 00:18:41.268473 | orchestrator | API version: 1.47 2026-03-29 00:18:41.268534 | orchestrator | Go version: go1.22.11 2026-03-29 00:18:41.268547 | orchestrator | Git commit: 9f9e405 2026-03-29 00:18:41.268558 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-29 00:18:41.268570 | orchestrator | OS/Arch: linux/amd64 2026-03-29 00:18:41.268769 | orchestrator | Context: default 2026-03-29 00:18:41.268795 | orchestrator | 2026-03-29 00:18:41.268807 | 
orchestrator | Server: Docker Engine - Community 2026-03-29 00:18:41.268818 | orchestrator | Engine: 2026-03-29 00:18:41.268830 | orchestrator | Version: 27.5.1 2026-03-29 00:18:41.268842 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-29 00:18:41.268882 | orchestrator | Go version: go1.22.11 2026-03-29 00:18:41.268893 | orchestrator | Git commit: 4c9b3b0 2026-03-29 00:18:41.268904 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-29 00:18:41.268915 | orchestrator | OS/Arch: linux/amd64 2026-03-29 00:18:41.268925 | orchestrator | Experimental: false 2026-03-29 00:18:41.268936 | orchestrator | containerd: 2026-03-29 00:18:41.268947 | orchestrator | Version: v2.2.2 2026-03-29 00:18:41.268958 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-29 00:18:41.268970 | orchestrator | runc: 2026-03-29 00:18:41.269081 | orchestrator | Version: 1.3.4 2026-03-29 00:18:41.269097 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-29 00:18:41.269113 | orchestrator | docker-init: 2026-03-29 00:18:41.269131 | orchestrator | Version: 0.19.0 2026-03-29 00:18:41.269152 | orchestrator | GitCommit: de40ad0 2026-03-29 00:18:41.271537 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-29 00:18:41.278624 | orchestrator | + set -e 2026-03-29 00:18:41.278673 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 00:18:41.278692 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 00:18:41.278713 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 00:18:41.278730 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 00:18:41.278746 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 00:18:41.278757 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 00:18:41.278768 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 00:18:41.278778 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 00:18:41.278789 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 00:18:41.278800 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2
2026-03-29 00:18:41.278811 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-29 00:18:41.278821 | orchestrator | ++ export ARA=false
2026-03-29 00:18:41.278832 | orchestrator | ++ ARA=false
2026-03-29 00:18:41.278842 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-29 00:18:41.278853 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-29 00:18:41.278864 | orchestrator | ++ export TEMPEST=true
2026-03-29 00:18:41.278874 | orchestrator | ++ TEMPEST=true
2026-03-29 00:18:41.278885 | orchestrator | ++ export IS_ZUUL=true
2026-03-29 00:18:41.278895 | orchestrator | ++ IS_ZUUL=true
2026-03-29 00:18:41.278906 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231
2026-03-29 00:18:41.278917 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231
2026-03-29 00:18:41.278927 | orchestrator | ++ export EXTERNAL_API=false
2026-03-29 00:18:41.278938 | orchestrator | ++ EXTERNAL_API=false
2026-03-29 00:18:41.278948 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-29 00:18:41.278961 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-29 00:18:41.278980 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-29 00:18:41.278999 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-29 00:18:41.279018 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-29 00:18:41.279037 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-29 00:18:41.279053 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-29 00:18:41.279064 | orchestrator | ++ export INTERACTIVE=false
2026-03-29 00:18:41.279075 | orchestrator | ++ INTERACTIVE=false
2026-03-29 00:18:41.279085 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-29 00:18:41.279101 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-29 00:18:41.279221 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-29 00:18:41.279242 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-03-29 00:18:41.284608 | orchestrator | + set -e
2026-03-29 00:18:41.284637 | orchestrator | + VERSION=9.5.0
2026-03-29 00:18:41.284651 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-03-29 00:18:41.293546 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-29 00:18:41.293637 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-29 00:18:41.297925 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-29 00:18:41.302674 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-03-29 00:18:41.310381 | orchestrator | /opt/configuration ~
2026-03-29 00:18:41.310457 | orchestrator | + set -e
2026-03-29 00:18:41.310471 | orchestrator | + pushd /opt/configuration
2026-03-29 00:18:41.310484 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-29 00:18:41.312301 | orchestrator | + source /opt/venv/bin/activate
2026-03-29 00:18:41.313327 | orchestrator | ++ deactivate nondestructive
2026-03-29 00:18:41.313366 | orchestrator | ++ '[' -n '' ']'
2026-03-29 00:18:41.313382 | orchestrator | ++ '[' -n '' ']'
2026-03-29 00:18:41.313565 | orchestrator | ++ hash -r
2026-03-29 00:18:41.313591 | orchestrator | ++ '[' -n '' ']'
2026-03-29 00:18:41.313609 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-29 00:18:41.313628 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-29 00:18:41.313646 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-29 00:18:41.313797 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-29 00:18:41.313825 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-29 00:18:41.313837 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-29 00:18:41.313848 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-29 00:18:41.313933 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-29 00:18:41.313949 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-29 00:18:41.313960 | orchestrator | ++ export PATH
2026-03-29 00:18:41.314109 | orchestrator | ++ '[' -n '' ']'
2026-03-29 00:18:41.314206 | orchestrator | ++ '[' -z '' ']'
2026-03-29 00:18:41.314221 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-29 00:18:41.314238 | orchestrator | ++ PS1='(venv) '
2026-03-29 00:18:41.314249 | orchestrator | ++ export PS1
2026-03-29 00:18:41.314260 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-29 00:18:41.314361 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-29 00:18:41.314388 | orchestrator | ++ hash -r
2026-03-29 00:18:41.314635 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-03-29 00:18:42.281017 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-03-29 00:18:42.281442 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.0)
2026-03-29 00:18:42.282832 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-03-29 00:18:42.284372 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-03-29 00:18:42.285614 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-03-29 00:18:42.295845 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-03-29 00:18:42.297564 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-03-29 00:18:42.298704 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-03-29 00:18:42.300160 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-03-29 00:18:42.335822 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6)
2026-03-29 00:18:42.337466 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-03-29 00:18:42.339165 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-03-29 00:18:42.340531 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-03-29 00:18:42.344466 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-03-29 00:18:42.560430 | orchestrator | ++ which gilt
2026-03-29 00:18:42.564405 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-03-29 00:18:42.564465 | orchestrator | + /opt/venv/bin/gilt overlay
2026-03-29 00:18:42.808648 | orchestrator | osism.cfg-generics:
2026-03-29 00:18:42.924746 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-03-29 00:18:42.925639 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-03-29 00:18:42.926423 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-03-29 00:18:42.926480 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-03-29 00:18:43.642795 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-03-29 00:18:43.652526 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-03-29 00:18:43.977215 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-03-29 00:18:44.030851 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-29 00:18:44.030973 | orchestrator | + deactivate
2026-03-29 00:18:44.031002 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-29 00:18:44.031024 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-29 00:18:44.031043 | orchestrator | + export PATH
2026-03-29 00:18:44.031061 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-29 00:18:44.031073 | orchestrator | + '[' -n '' ']'
2026-03-29 00:18:44.031086 | orchestrator | + hash -r
2026-03-29 00:18:44.031206 | orchestrator | ~
2026-03-29 00:18:44.031239 | orchestrator | + '[' -n '' ']'
2026-03-29 00:18:44.031252 | orchestrator | + unset VIRTUAL_ENV
2026-03-29 00:18:44.031263 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-29 00:18:44.031274 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-29 00:18:44.031285 | orchestrator | + unset -f deactivate
2026-03-29 00:18:44.031296 | orchestrator | + popd
2026-03-29 00:18:44.033046 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-29 00:18:44.033103 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-29 00:18:44.034417 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-29 00:18:44.097531 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-29 00:18:44.097631 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-29 00:18:44.098356 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-29 00:18:44.158649 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-29 00:18:44.159438 | orchestrator | ++ semver 2024.2 2025.1
2026-03-29 00:18:44.214108 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-29 00:18:44.214191 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-29 00:18:44.304401 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-29 00:18:44.304536 | orchestrator | + source /opt/venv/bin/activate
2026-03-29 00:18:44.304554 | orchestrator | ++ deactivate nondestructive
2026-03-29 00:18:44.304745 | orchestrator | ++ '[' -n '' ']'
2026-03-29 00:18:44.304764 | orchestrator | ++ '[' -n '' ']'
2026-03-29 00:18:44.304775 | orchestrator | ++ hash -r
2026-03-29 00:18:44.304786 | orchestrator | ++ '[' -n '' ']'
2026-03-29 00:18:44.304797 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-29 00:18:44.304808 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-29 00:18:44.304819 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-29 00:18:44.304831 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-29 00:18:44.304842 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-29 00:18:44.304853 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-29 00:18:44.304865 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-29 00:18:44.304876 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-29 00:18:44.304911 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-29 00:18:44.304922 | orchestrator | ++ export PATH
2026-03-29 00:18:44.304933 | orchestrator | ++ '[' -n '' ']'
2026-03-29 00:18:44.304944 | orchestrator | ++ '[' -z '' ']'
2026-03-29 00:18:44.304955 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-29 00:18:44.305073 | orchestrator | ++ PS1='(venv) '
2026-03-29 00:18:44.305089 | orchestrator | ++ export PS1
2026-03-29 00:18:44.305101 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-29 00:18:44.305111 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-29 00:18:44.305122 | orchestrator | ++ hash -r
2026-03-29 00:18:44.305133 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-29 00:18:45.445955 | orchestrator |
2026-03-29 00:18:45.446096 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-29 00:18:45.446113 | orchestrator |
2026-03-29 00:18:45.446125 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-29 00:18:46.009726 | orchestrator | ok: [testbed-manager]
2026-03-29 00:18:46.009824 | orchestrator |
2026-03-29 00:18:46.009841 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-29 00:18:46.979965 | orchestrator | changed: [testbed-manager]
2026-03-29 00:18:46.980061 | orchestrator |
2026-03-29 00:18:46.980076 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-29 00:18:46.980119 | orchestrator |
2026-03-29 00:18:46.980131 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-29 00:18:49.304528 | orchestrator | ok: [testbed-manager]
2026-03-29 00:18:49.304629 | orchestrator |
2026-03-29 00:18:49.304642 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-29 00:18:49.348840 | orchestrator | ok: [testbed-manager]
2026-03-29 00:18:49.348921 | orchestrator |
2026-03-29 00:18:49.348934 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-29 00:18:49.809028 | orchestrator | changed: [testbed-manager]
2026-03-29 00:18:49.809125 | orchestrator |
2026-03-29 00:18:49.809143 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-29 00:18:49.855443 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:18:49.855561 | orchestrator |
2026-03-29 00:18:49.855578 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-29 00:18:50.184083 | orchestrator | changed: [testbed-manager]
2026-03-29 00:18:50.184200 | orchestrator |
2026-03-29 00:18:50.184226 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-03-29 00:18:50.527293 | orchestrator | ok: [testbed-manager]
2026-03-29 00:18:50.527392 | orchestrator |
2026-03-29 00:18:50.527407 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-03-29 00:18:50.639713 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:18:50.639815 | orchestrator |
2026-03-29 00:18:50.639831 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-03-29 00:18:50.639844 | orchestrator |
2026-03-29 00:18:50.639855 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-29 00:18:52.439925 | orchestrator | ok: [testbed-manager]
2026-03-29 00:18:52.439998 | orchestrator |
2026-03-29 00:18:52.440007 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-03-29 00:18:52.526568 | orchestrator | included: osism.services.traefik for testbed-manager
2026-03-29 00:18:52.526687 | orchestrator |
2026-03-29 00:18:52.526711 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-03-29 00:18:52.595539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-03-29 00:18:52.595618 | orchestrator |
2026-03-29 00:18:52.595628 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-03-29 00:18:53.692108 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-03-29 00:18:53.692201 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-03-29 00:18:53.692216 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-03-29 00:18:53.692229 | orchestrator |
2026-03-29 00:18:53.692244 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-03-29 00:18:55.475096 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-03-29 00:18:55.475219 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-03-29 00:18:55.475244 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-03-29 00:18:55.475265 | orchestrator |
2026-03-29 00:18:55.475285 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-03-29 00:18:56.128939 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-29 00:18:56.129043 | orchestrator | changed: [testbed-manager]
2026-03-29 00:18:56.129059 | orchestrator |
2026-03-29 00:18:56.129072 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-03-29 00:18:56.800492 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-29 00:18:56.800564 | orchestrator | changed: [testbed-manager]
2026-03-29 00:18:56.800572 | orchestrator |
2026-03-29 00:18:56.800576 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-03-29 00:18:56.848590 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:18:56.848674 | orchestrator |
2026-03-29 00:18:56.848687 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-03-29 00:18:57.188804 | orchestrator | ok: [testbed-manager]
2026-03-29 00:18:57.188905 | orchestrator |
2026-03-29 00:18:57.188922 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-03-29 00:18:57.252083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-03-29 00:18:57.252185 | orchestrator |
2026-03-29 00:18:57.252200 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-03-29 00:18:58.407028 | orchestrator | changed: [testbed-manager]
2026-03-29 00:18:58.407147 | orchestrator |
2026-03-29 00:18:58.407164 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-03-29 00:18:59.254158 | orchestrator | changed: [testbed-manager]
2026-03-29 00:18:59.254257 | orchestrator |
2026-03-29 00:18:59.254273 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-03-29 00:19:12.620662 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:12.620790 | orchestrator |
2026-03-29 00:19:12.620814 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-03-29 00:19:12.670014 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:19:12.670162 | orchestrator |
2026-03-29 00:19:12.670200 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-03-29 00:19:12.670213 | orchestrator |
2026-03-29 00:19:12.670226 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-29 00:19:14.496711 | orchestrator | ok: [testbed-manager]
2026-03-29 00:19:14.496813 | orchestrator |
2026-03-29 00:19:14.496829 | orchestrator | TASK [Apply manager role] ******************************************************
2026-03-29 00:19:14.613017 | orchestrator | included: osism.services.manager for testbed-manager
2026-03-29 00:19:14.613143 | orchestrator |
2026-03-29 00:19:14.613160 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-29 00:19:14.671663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-29 00:19:14.671754 | orchestrator |
2026-03-29 00:19:14.671769 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-29 00:19:17.106146 | orchestrator | ok: [testbed-manager]
2026-03-29 00:19:17.106257 | orchestrator |
2026-03-29 00:19:17.106284 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-29 00:19:17.160093 | orchestrator | ok: [testbed-manager]
2026-03-29 00:19:17.160196 | orchestrator |
2026-03-29 00:19:17.160214 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-29 00:19:17.281851 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-29 00:19:17.281944 | orchestrator |
2026-03-29 00:19:17.281959 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-29 00:19:20.112329 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-03-29 00:19:20.112564 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-03-29 00:19:20.112596 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-29 00:19:20.112617 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-03-29 00:19:20.112635 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-29 00:19:20.112654 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-29 00:19:20.112668 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-29 00:19:20.112679 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-03-29 00:19:20.112691 | orchestrator |
2026-03-29 00:19:20.112703 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-29 00:19:20.751977 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:20.752077 | orchestrator |
2026-03-29 00:19:20.752095 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-29 00:19:21.403096 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:21.403220 | orchestrator |
2026-03-29 00:19:21.403248 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-29 00:19:21.479934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-29 00:19:21.480029 | orchestrator |
2026-03-29 00:19:21.480043 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-29 00:19:22.700208 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-03-29 00:19:22.700311 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-03-29 00:19:22.700326 | orchestrator |
2026-03-29 00:19:22.700339 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-29 00:19:23.342128 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:23.342230 | orchestrator |
2026-03-29 00:19:23.342245 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-29 00:19:23.397975 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:19:23.398122 | orchestrator |
2026-03-29 00:19:23.398135 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-29 00:19:23.473611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-29 00:19:23.473703 | orchestrator |
2026-03-29 00:19:23.473718 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-29 00:19:24.084807 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:24.084903 | orchestrator |
2026-03-29 00:19:24.084917 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-29 00:19:24.145145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-29 00:19:24.145240 | orchestrator |
2026-03-29 00:19:24.145255 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-29 00:19:25.424320 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-29 00:19:25.424498 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-29 00:19:25.424518 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:25.424531 | orchestrator |
2026-03-29 00:19:25.424544 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-29 00:19:26.030877 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:26.030997 | orchestrator |
2026-03-29 00:19:26.031013 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-29 00:19:26.084467 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:19:26.084569 | orchestrator |
2026-03-29 00:19:26.084585 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-29 00:19:26.188321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-29 00:19:26.188414 | orchestrator |
2026-03-29 00:19:26.188488 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-29 00:19:26.675891 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:26.675994 | orchestrator |
2026-03-29 00:19:26.676010 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-29 00:19:27.043575 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:27.043687 | orchestrator |
2026-03-29 00:19:27.043711 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-29 00:19:28.228757 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-03-29 00:19:28.228857 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-03-29 00:19:28.228872 | orchestrator |
2026-03-29 00:19:28.228885 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-29 00:19:28.985839 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:28.985940 | orchestrator |
2026-03-29 00:19:28.985955 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-29 00:19:29.357841 | orchestrator | ok: [testbed-manager]
2026-03-29 00:19:29.357937 | orchestrator |
2026-03-29 00:19:29.357953 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-29 00:19:29.734986 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:29.735100 | orchestrator |
2026-03-29 00:19:29.735123 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-29 00:19:29.777384 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:19:29.777566 | orchestrator |
2026-03-29 00:19:29.777595 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-29 00:19:29.843223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-29 00:19:29.843354 | orchestrator |
2026-03-29 00:19:29.843371 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-29 00:19:29.886934 | orchestrator | ok: [testbed-manager]
2026-03-29 00:19:29.887025 | orchestrator |
2026-03-29 00:19:29.887039 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-29 00:19:31.933953 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-03-29 00:19:31.934143 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-03-29 00:19:31.934163 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-03-29 00:19:31.934176 | orchestrator |
2026-03-29 00:19:31.934188 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-29 00:19:32.708845 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:32.708948 | orchestrator |
2026-03-29 00:19:32.708965 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-29 00:19:33.456099 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:33.456201 | orchestrator |
2026-03-29 00:19:33.456217 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-29 00:19:34.204523 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:34.204628 | orchestrator |
2026-03-29 00:19:34.204644 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-29 00:19:34.287467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-29 00:19:34.287554 | orchestrator |
2026-03-29 00:19:34.287569 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-29 00:19:34.339792 | orchestrator | ok: [testbed-manager]
2026-03-29 00:19:34.339890 | orchestrator |
2026-03-29 00:19:34.339907 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-29 00:19:35.178324 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-03-29 00:19:35.178489 | orchestrator |
2026-03-29 00:19:35.178508 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-29 00:19:35.272380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-29 00:19:35.272527 | orchestrator |
2026-03-29 00:19:35.272545 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-29 00:19:35.993498 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:35.993602 | orchestrator |
2026-03-29 00:19:35.993618 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-29 00:19:36.598957 | orchestrator | ok: [testbed-manager]
2026-03-29 00:19:36.599079 | orchestrator |
2026-03-29 00:19:36.599098 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-29 00:19:36.635440 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:19:36.635547 | orchestrator |
2026-03-29 00:19:36.635568 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-29 00:19:36.694821 | orchestrator | ok: [testbed-manager]
2026-03-29 00:19:36.694919 | orchestrator |
2026-03-29 00:19:36.694936 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-29 00:19:37.540214 | orchestrator | changed: [testbed-manager]
2026-03-29 00:19:37.540339 | orchestrator |
2026-03-29 00:19:37.540357 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-29 00:20:41.972239 | orchestrator | changed: [testbed-manager]
2026-03-29 00:20:41.972483 | orchestrator |
2026-03-29 00:20:41.972513 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-29 00:20:42.876919 | orchestrator | ok: [testbed-manager]
2026-03-29 00:20:42.877022 | orchestrator |
2026-03-29 00:20:42.877038 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-29 00:20:42.929957 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:20:42.930122 | orchestrator |
2026-03-29 00:20:42.930139 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-29 00:20:52.589822 | orchestrator | changed: [testbed-manager]
2026-03-29 00:20:52.589936 | orchestrator |
2026-03-29 00:20:52.589951 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-29 00:20:52.636838 | orchestrator | ok: [testbed-manager]
2026-03-29 00:20:52.636957 | orchestrator |
2026-03-29 00:20:52.636983 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-29 00:20:52.637003 | orchestrator |
2026-03-29 00:20:52.637021 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-29 00:20:52.753943 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:20:52.754133 | orchestrator |
2026-03-29 00:20:52.754155 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-29 00:21:52.815029 | orchestrator | Pausing for 60 seconds
2026-03-29 00:21:52.815162 | orchestrator | changed: [testbed-manager]
2026-03-29 00:21:52.815181 | orchestrator |
2026-03-29 00:21:52.815194 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-29 00:21:55.812309 | orchestrator | changed: [testbed-manager]
2026-03-29 00:21:55.812415 | orchestrator |
2026-03-29 00:21:55.812432 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-29 00:22:57.768271 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-29 00:22:57.768387 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-29 00:22:57.768423 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-03-29 00:22:57.768436 | orchestrator | changed: [testbed-manager]
2026-03-29 00:22:57.768449 | orchestrator |
2026-03-29 00:22:57.768461 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-29 00:23:07.534756 | orchestrator | changed: [testbed-manager]
2026-03-29 00:23:07.534882 | orchestrator |
2026-03-29 00:23:07.534902 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-29 00:23:07.617568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-29 00:23:07.617662 | orchestrator |
2026-03-29 00:23:07.617676 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-29 00:23:07.617689 | orchestrator |
2026-03-29 00:23:07.617700 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-29 00:23:07.656359 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:23:07.656449 | orchestrator |
2026-03-29 00:23:07.656468 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-29 00:23:07.724736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-29 00:23:07.724842 | orchestrator |
2026-03-29 00:23:07.724857 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-29 00:23:08.415261 | orchestrator | changed: [testbed-manager]
2026-03-29 00:23:08.416223 | orchestrator |
2026-03-29 00:23:08.416257 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-29 00:23:11.477904 | orchestrator | ok: [testbed-manager]
2026-03-29 00:23:11.478107 | orchestrator |
2026-03-29 00:23:11.478130 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-29 00:23:11.554364 | orchestrator | ok: [testbed-manager] => {
2026-03-29 00:23:11.554468 | orchestrator | "version_check_result.stdout_lines": [
2026-03-29 00:23:11.554484 | orchestrator | "=== OSISM Container Version Check ===",
2026-03-29 00:23:11.554496 | orchestrator | "Checking running containers against expected versions...",
2026-03-29 00:23:11.554508 | orchestrator | "",
2026-03-29 00:23:11.554521 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-29 00:23:11.554532 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-29 00:23:11.554544 | orchestrator | " Enabled: true",
2026-03-29 00:23:11.554555 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-29 00:23:11.554566 | orchestrator | " Status: ✅ MATCH",
2026-03-29 00:23:11.554577 | orchestrator | "",
2026-03-29 00:23:11.554588 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-29 00:23:11.554629 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-29 00:23:11.554642 | orchestrator | " Enabled: true",
2026-03-29 00:23:11.554653 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-29 00:23:11.554663 | orchestrator | " Status: ✅ MATCH",
2026-03-29 00:23:11.554674 | orchestrator | "",
2026-03-29 00:23:11.554685 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-29 00:23:11.554695 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-29 00:23:11.554706 | orchestrator | " Enabled: true",
2026-03-29 00:23:11.554717 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-29 00:23:11.554727 | orchestrator | " Status: ✅ MATCH",
2026-03-29 00:23:11.554738 | orchestrator | "",
2026-03-29 00:23:11.554749 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-29 00:23:11.554759 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-29 00:23:11.554770 | orchestrator | " Enabled: true",
2026-03-29 00:23:11.554781 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-29 00:23:11.554791 | orchestrator | " Status: ✅ MATCH",
2026-03-29 00:23:11.554802 | orchestrator | "",
2026-03-29 00:23:11.554815 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-29 00:23:11.554826 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-29 00:23:11.554837 | orchestrator | " Enabled: true",
2026-03-29 00:23:11.554848 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-29 00:23:11.554858 | orchestrator | " Status: ✅ MATCH",
2026-03-29 00:23:11.554869 | orchestrator | "",
2026-03-29 00:23:11.554880 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-29 00:23:11.554891 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-29 00:23:11.554903 | orchestrator | " Enabled: true",
2026-03-29 00:23:11.554916 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-29 00:23:11.554928 | orchestrator | " Status: ✅ MATCH",
2026-03-29 00:23:11.554941 | orchestrator | "",
2026-03-29 00:23:11.554953 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-29 00:23:11.554966 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-29 00:23:11.554978 | orchestrator | " Enabled: true",
2026-03-29 00:23:11.554991 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-29 00:23:11.555003 | orchestrator | " Status: ✅ MATCH",
2026-03-29 00:23:11.555016 | orchestrator | "",
2026-03-29 00:23:11.555028 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-03-29 00:23:11.555040 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-29 00:23:11.555052 | orchestrator | " Enabled: true", 2026-03-29 00:23:11.555065 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-29 00:23:11.555077 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:23:11.555090 | orchestrator | "", 2026-03-29 00:23:11.555102 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-29 00:23:11.555114 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-29 00:23:11.555126 | orchestrator | " Enabled: true", 2026-03-29 00:23:11.555138 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-29 00:23:11.555175 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:23:11.555188 | orchestrator | "", 2026-03-29 00:23:11.555206 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-29 00:23:11.555224 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-29 00:23:11.555243 | orchestrator | " Enabled: true", 2026-03-29 00:23:11.555260 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-29 00:23:11.555276 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:23:11.555294 | orchestrator | "", 2026-03-29 00:23:11.555311 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-29 00:23:11.555328 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 00:23:11.555356 | orchestrator | " Enabled: true", 2026-03-29 00:23:11.555376 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 00:23:11.555395 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:23:11.555413 | orchestrator | "", 2026-03-29 00:23:11.555432 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-29 00:23:11.555444 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 00:23:11.555454 | orchestrator | " Enabled: true", 2026-03-29 00:23:11.555465 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 00:23:11.555475 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:23:11.555487 | orchestrator | "", 2026-03-29 00:23:11.555498 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-29 00:23:11.555508 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 00:23:11.555519 | orchestrator | " Enabled: true", 2026-03-29 00:23:11.555530 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 00:23:11.555540 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:23:11.555551 | orchestrator | "", 2026-03-29 00:23:11.555561 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-29 00:23:11.555572 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 00:23:11.555582 | orchestrator | " Enabled: true", 2026-03-29 00:23:11.555593 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 00:23:11.555626 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:23:11.555644 | orchestrator | "", 2026-03-29 00:23:11.555661 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-29 00:23:11.555672 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 00:23:11.555693 | orchestrator | " Enabled: true", 2026-03-29 00:23:11.555704 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-29 00:23:11.555715 | orchestrator | " Status: ✅ MATCH", 2026-03-29 00:23:11.555725 | orchestrator | "", 2026-03-29 00:23:11.555736 | orchestrator | "=== Summary ===", 2026-03-29 00:23:11.555747 | orchestrator | "Errors (version mismatches): 0", 2026-03-29 00:23:11.555758 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-29 00:23:11.555768 | orchestrator | "", 2026-03-29 00:23:11.555779 | orchestrator | "✅ All running containers match expected versions!" 2026-03-29 00:23:11.555790 | orchestrator | ] 2026-03-29 00:23:11.555800 | orchestrator | } 2026-03-29 00:23:11.555812 | orchestrator | 2026-03-29 00:23:11.555823 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-29 00:23:11.603581 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:23:11.603674 | orchestrator | 2026-03-29 00:23:11.603689 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:23:11.603702 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-29 00:23:11.603713 | orchestrator | 2026-03-29 00:23:11.701199 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-29 00:23:11.701294 | orchestrator | + deactivate 2026-03-29 00:23:11.701309 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-29 00:23:11.701321 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-29 00:23:11.701332 | orchestrator | + export PATH 2026-03-29 00:23:11.701343 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-29 00:23:11.701355 | orchestrator | + '[' -n '' ']' 2026-03-29 00:23:11.701366 | orchestrator | + hash -r 2026-03-29 00:23:11.701376 | orchestrator | + '[' -n '' ']' 2026-03-29 00:23:11.701387 | orchestrator | + unset VIRTUAL_ENV 2026-03-29 00:23:11.701398 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-29 00:23:11.701408 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-29 00:23:11.701419 | orchestrator | + unset -f deactivate 2026-03-29 00:23:11.701495 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-29 00:23:11.709019 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-29 00:23:11.709112 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-29 00:23:11.709227 | orchestrator | + local max_attempts=60 2026-03-29 00:23:11.709250 | orchestrator | + local name=ceph-ansible 2026-03-29 00:23:11.709269 | orchestrator | + local attempt_num=1 2026-03-29 00:23:11.709445 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:23:11.745483 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:23:11.745576 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-29 00:23:11.745598 | orchestrator | + local max_attempts=60 2026-03-29 00:23:11.745618 | orchestrator | + local name=kolla-ansible 2026-03-29 00:23:11.745637 | orchestrator | + local attempt_num=1 2026-03-29 00:23:11.745968 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-29 00:23:11.786475 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:23:11.786557 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-29 00:23:11.786571 | orchestrator | + local max_attempts=60 2026-03-29 00:23:11.786582 | orchestrator | + local name=osism-ansible 2026-03-29 00:23:11.786594 | orchestrator | + local attempt_num=1 2026-03-29 00:23:11.787486 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-29 00:23:11.822993 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:23:11.823065 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-29 00:23:11.823077 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-29 00:23:12.493347 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-29 00:23:12.680018 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-29 00:23:12.680118 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-29 00:23:12.680135 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-29 00:23:12.680174 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-29 00:23:12.680246 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-29 00:23:12.680283 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-29 00:23:12.680295 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-29 00:23:12.680306 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-29 00:23:12.680317 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-29 00:23:12.680328 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-29 00:23:12.680339 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-03-29 00:23:12.680350 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-29 00:23:12.680360 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-29 00:23:12.680392 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-29 00:23:12.680404 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-29 00:23:12.680416 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-29 00:23:12.687671 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-29 00:23:12.746950 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-29 00:23:12.747041 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-29 00:23:12.750590 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-29 00:23:25.025699 | orchestrator | 2026-03-29 00:23:25 | INFO  | Task 840bf910-7e1e-488f-9b6c-d981f7f610b2 (resolvconf) was prepared for execution. 2026-03-29 00:23:25.025804 | orchestrator | 2026-03-29 00:23:25 | INFO  | It takes a moment until task 840bf910-7e1e-488f-9b6c-d981f7f610b2 (resolvconf) has been started and output is visible here. 
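The `set -x` trace above (`wait_for_container_healthy 60 ceph-ansible` and friends) can be reconstructed as a small polling helper. This is a sketch inferred from the trace, not the testbed's actual script: the `docker inspect` health-status template and the `max_attempts`/`attempt_num` variables appear in the log, while the sleep interval, error message, and the stubbed `docker` function at the end (added so the sketch runs without a Docker daemon) are assumptions for illustration.

```shell
#!/usr/bin/env bash
# Poll a container's health status until it reports "healthy",
# giving up after max_attempts tries (sketch reconstructed from the trace).
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5  # assumed interval; not visible in the trace
    done
}

# Demo stub so the sketch is runnable without Docker (assumption, demo only):
docker() { echo healthy; }

wait_for_container_healthy 60 ceph-ansible && echo "ceph-ansible healthy"
```

In the real run the loop exits on the first check, since `docker inspect` already returns `healthy` for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`.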
2026-03-29 00:23:38.625663 | orchestrator | 2026-03-29 00:23:38.625823 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-29 00:23:38.625842 | orchestrator | 2026-03-29 00:23:38.625855 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 00:23:38.625866 | orchestrator | Sunday 29 March 2026 00:23:29 +0000 (0:00:00.107) 0:00:00.107 ********** 2026-03-29 00:23:38.625878 | orchestrator | ok: [testbed-manager] 2026-03-29 00:23:38.625890 | orchestrator | 2026-03-29 00:23:38.625901 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-29 00:23:38.625912 | orchestrator | Sunday 29 March 2026 00:23:32 +0000 (0:00:03.472) 0:00:03.579 ********** 2026-03-29 00:23:38.625923 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:23:38.625951 | orchestrator | 2026-03-29 00:23:38.625963 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-29 00:23:38.625973 | orchestrator | Sunday 29 March 2026 00:23:32 +0000 (0:00:00.048) 0:00:03.627 ********** 2026-03-29 00:23:38.625984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-29 00:23:38.625996 | orchestrator | 2026-03-29 00:23:38.626007 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-29 00:23:38.626162 | orchestrator | Sunday 29 March 2026 00:23:32 +0000 (0:00:00.075) 0:00:03.702 ********** 2026-03-29 00:23:38.626199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 00:23:38.626214 | orchestrator | 2026-03-29 00:23:38.626226 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-29 00:23:38.626239 | orchestrator | Sunday 29 March 2026 00:23:32 +0000 (0:00:00.057) 0:00:03.760 ********** 2026-03-29 00:23:38.626251 | orchestrator | ok: [testbed-manager] 2026-03-29 00:23:38.626263 | orchestrator | 2026-03-29 00:23:38.626275 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-29 00:23:38.626287 | orchestrator | Sunday 29 March 2026 00:23:33 +0000 (0:00:01.091) 0:00:04.852 ********** 2026-03-29 00:23:38.626299 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:23:38.626312 | orchestrator | 2026-03-29 00:23:38.626324 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-29 00:23:38.626335 | orchestrator | Sunday 29 March 2026 00:23:34 +0000 (0:00:00.065) 0:00:04.918 ********** 2026-03-29 00:23:38.626373 | orchestrator | ok: [testbed-manager] 2026-03-29 00:23:38.626386 | orchestrator | 2026-03-29 00:23:38.626399 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-29 00:23:38.626411 | orchestrator | Sunday 29 March 2026 00:23:34 +0000 (0:00:00.498) 0:00:05.417 ********** 2026-03-29 00:23:38.626423 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:23:38.626435 | orchestrator | 2026-03-29 00:23:38.626447 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-29 00:23:38.626460 | orchestrator | Sunday 29 March 2026 00:23:34 +0000 (0:00:00.090) 0:00:05.507 ********** 2026-03-29 00:23:38.626472 | orchestrator | changed: [testbed-manager] 2026-03-29 00:23:38.626484 | orchestrator | 2026-03-29 00:23:38.626496 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-29 00:23:38.626508 | orchestrator | Sunday 29 March 2026 00:23:35 +0000 (0:00:00.547) 0:00:06.054 ********** 2026-03-29 00:23:38.626520 | orchestrator | changed: 
[testbed-manager] 2026-03-29 00:23:38.626532 | orchestrator | 2026-03-29 00:23:38.626545 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-29 00:23:38.626557 | orchestrator | Sunday 29 March 2026 00:23:36 +0000 (0:00:01.070) 0:00:07.125 ********** 2026-03-29 00:23:38.626568 | orchestrator | ok: [testbed-manager] 2026-03-29 00:23:38.626578 | orchestrator | 2026-03-29 00:23:38.626589 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-29 00:23:38.626599 | orchestrator | Sunday 29 March 2026 00:23:37 +0000 (0:00:00.952) 0:00:08.078 ********** 2026-03-29 00:23:38.626610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-29 00:23:38.626621 | orchestrator | 2026-03-29 00:23:38.626632 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-29 00:23:38.626642 | orchestrator | Sunday 29 March 2026 00:23:37 +0000 (0:00:00.080) 0:00:08.158 ********** 2026-03-29 00:23:38.626653 | orchestrator | changed: [testbed-manager] 2026-03-29 00:23:38.626663 | orchestrator | 2026-03-29 00:23:38.626673 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:23:38.626685 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 00:23:38.626696 | orchestrator | 2026-03-29 00:23:38.626707 | orchestrator | 2026-03-29 00:23:38.626717 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:23:38.626728 | orchestrator | Sunday 29 March 2026 00:23:38 +0000 (0:00:01.120) 0:00:09.278 ********** 2026-03-29 00:23:38.626738 | orchestrator | =============================================================================== 2026-03-29 00:23:38.626749 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.47s 2026-03-29 00:23:38.626760 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.12s 2026-03-29 00:23:38.626770 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.09s 2026-03-29 00:23:38.626780 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s 2026-03-29 00:23:38.626791 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2026-03-29 00:23:38.626802 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2026-03-29 00:23:38.626833 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2026-03-29 00:23:38.626845 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-03-29 00:23:38.626856 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-03-29 00:23:38.626866 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-29 00:23:38.626877 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-03-29 00:23:38.626887 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s 2026-03-29 00:23:38.626905 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2026-03-29 00:23:38.916057 | orchestrator | + osism apply sshconfig 2026-03-29 00:23:50.892559 | orchestrator | 2026-03-29 00:23:50 | INFO  | Task f4a6f59e-8296-459d-aecd-87779cd93b1c (sshconfig) was prepared for execution. 
2026-03-29 00:23:50.892670 | orchestrator | 2026-03-29 00:23:50 | INFO  | It takes a moment until task f4a6f59e-8296-459d-aecd-87779cd93b1c (sshconfig) has been started and output is visible here. 2026-03-29 00:24:01.716667 | orchestrator | 2026-03-29 00:24:01.716779 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-29 00:24:01.716796 | orchestrator | 2026-03-29 00:24:01.716831 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-29 00:24:01.716843 | orchestrator | Sunday 29 March 2026 00:23:54 +0000 (0:00:00.121) 0:00:00.121 ********** 2026-03-29 00:24:01.716854 | orchestrator | ok: [testbed-manager] 2026-03-29 00:24:01.716867 | orchestrator | 2026-03-29 00:24:01.716878 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-29 00:24:01.716889 | orchestrator | Sunday 29 March 2026 00:23:55 +0000 (0:00:00.488) 0:00:00.610 ********** 2026-03-29 00:24:01.716901 | orchestrator | changed: [testbed-manager] 2026-03-29 00:24:01.716913 | orchestrator | 2026-03-29 00:24:01.716923 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-29 00:24:01.716934 | orchestrator | Sunday 29 March 2026 00:23:55 +0000 (0:00:00.455) 0:00:01.065 ********** 2026-03-29 00:24:01.716945 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-29 00:24:01.716956 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-29 00:24:01.716967 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-29 00:24:01.716978 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-29 00:24:01.716989 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-29 00:24:01.716999 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-29 00:24:01.717010 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-29 00:24:01.717021 | orchestrator | 2026-03-29 00:24:01.717032 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-29 00:24:01.717043 | orchestrator | Sunday 29 March 2026 00:24:00 +0000 (0:00:05.203) 0:00:06.268 ********** 2026-03-29 00:24:01.717053 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:24:01.717064 | orchestrator | 2026-03-29 00:24:01.717075 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-29 00:24:01.717086 | orchestrator | Sunday 29 March 2026 00:24:00 +0000 (0:00:00.073) 0:00:06.342 ********** 2026-03-29 00:24:01.717097 | orchestrator | changed: [testbed-manager] 2026-03-29 00:24:01.717158 | orchestrator | 2026-03-29 00:24:01.717169 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:24:01.717181 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:24:01.717193 | orchestrator | 2026-03-29 00:24:01.717203 | orchestrator | 2026-03-29 00:24:01.717216 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:24:01.717230 | orchestrator | Sunday 29 March 2026 00:24:01 +0000 (0:00:00.538) 0:00:06.880 ********** 2026-03-29 00:24:01.717243 | orchestrator | =============================================================================== 2026-03-29 00:24:01.717255 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.20s 2026-03-29 00:24:01.717268 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s 2026-03-29 00:24:01.717282 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.49s 2026-03-29 00:24:01.717295 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.46s 2026-03-29 00:24:01.717309 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-03-29 00:24:01.989941 | orchestrator | + osism apply known-hosts 2026-03-29 00:24:14.031637 | orchestrator | 2026-03-29 00:24:14 | INFO  | Task 8de7acd4-1d07-46fb-a583-c1efd5c371c6 (known-hosts) was prepared for execution. 2026-03-29 00:24:14.031744 | orchestrator | 2026-03-29 00:24:14 | INFO  | It takes a moment until task 8de7acd4-1d07-46fb-a583-c1efd5c371c6 (known-hosts) has been started and output is visible here. 2026-03-29 00:24:30.577474 | orchestrator | 2026-03-29 00:24:30.577575 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-29 00:24:30.577589 | orchestrator | 2026-03-29 00:24:30.577600 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-29 00:24:30.577612 | orchestrator | Sunday 29 March 2026 00:24:18 +0000 (0:00:00.158) 0:00:00.158 ********** 2026-03-29 00:24:30.577622 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-29 00:24:30.577632 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-29 00:24:30.577642 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-29 00:24:30.577652 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-29 00:24:30.577662 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-29 00:24:30.577672 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-29 00:24:30.577681 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-29 00:24:30.577691 | orchestrator | 2026-03-29 00:24:30.577701 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-29 00:24:30.577712 | orchestrator | Sunday 29 March 2026 00:24:24 +0000 (0:00:05.857) 0:00:06.016 ********** 2026-03-29 
00:24:30.577723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-29 00:24:30.577735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-29 00:24:30.577745 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-29 00:24:30.577754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-29 00:24:30.577764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-29 00:24:30.577783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-29 00:24:30.577794 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-29 00:24:30.577803 | orchestrator |
2026-03-29 00:24:30.577813 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:30.577823 | orchestrator | Sunday 29 March 2026 00:24:24 +0000 (0:00:00.168) 0:00:06.185 **********
2026-03-29 00:24:30.577833 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH3434k3t9lzSHeA2FyzO6keiCC1lFV0GyxLEsUakm7C)
2026-03-29 00:24:30.577851 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7CJDIy3PksiB2bU6bFySzZzvkhHxzBmIhfl0NpThHbp6SncZUMOxpRVqnY2lOm0UQMz2lYBVigEuohvwn0+afAM0xb4xPMZRxhtZ+dMNN8F0ukZKdzx2HRZt+9haykUkqN4H0lmHrcoC01fIWZjWzkvNUrSBuXyytd3xVDNC4nIQM/oDeI5CCIRYPaEyw/qMuYpeJvsSNoPWvL2PT1EvlUQUgAmL4m0Ch8ubH8VRz7a29BtSALHwTiPCaG4Y5gg6NPvU16bSQtW8AITJLNQfd0Fp6Zlok28bvh1uZJIWI4c8fVT0GFaYnUpuZOZdjl1ygT3u6XPsvZ40SGHd71uovNMS4bejRiZfJ6QhIGSFZKhzXB9N6HwDQ1tsoCambpUEyskNG3Y4PsWLn/YbS5mPpjvX6Vh8S8TQTuuYMqflkez0APJBy9b6ZY4pIon58YrfVYR4lj6+1PHUzmY1g109Wtoou+P6/EEN61f/eGDzfw14zXT7hxRCVOFDWWy+nev0=)
2026-03-29 00:24:30.577887 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFcr58ioYGaXHYk0Z7JWKh6tTh3q4iHK5rR5zAVCT7/uFs9k0SpHMvPPLmTX+NlIc8nq4ntIis4Jq7EKAaenlzs=)
2026-03-29 00:24:30.577899 | orchestrator |
2026-03-29 00:24:30.577910 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:30.577919 | orchestrator | Sunday 29 March 2026 00:24:25 +0000 (0:00:01.169) 0:00:07.354 **********
2026-03-29 00:24:30.577946 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcd0AjGV3sF3DkmwX48650WEPvrtZ21S2Jydr1bsOx4KrWlEYVfUo/X+5lj5pe4BUYbYcD4dH3xxvj2GDnk09QzcMVLxqs/E8LRWrO/uTr56FaZZ1PnDxPm2QRmDkTQDv8vFD0xMbVHrGTEE/Rd1GJ+e+bsXsjtkZpowGurnKbSJ//omQzW6Fgnlh2aah9zuWPLHslt+Gs0SPk2NOHqEruwXy43VFyfe9OSJlHsfr9PTGFcsa0T3m0DJ6xfvZQfHdy6zbL0+66DkRrQhCDdOW0ZJWZiH5ZIQ7OgXAR/EKNW5Ae3S9Clq7gsJExFd0GBsmxS+vJPCvPQIznh/5E7VRgN9b37+lYuVpMJHjUzcKfHNV7Gd1GDvVaETEKLyZFdKSJqdmHbYe9lD0qsOSlvGE2MTWdrsftpGo368Sj4yZ60bYuRXjP9Rd95gBrhl0bfw9afc06NVFpDkLy52+rvLn6oFVM1oywxrlv0DJvMq/F9AC3m9ISMNlXXP2W1eevAhk=)
2026-03-29 00:24:30.577958 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLz7PLMTsw98O6sAA1bcUvyrCZO56sl98LOQZcgEGvBwjwXTwCx6R5jGH+hmBDwokf6MmhRuFUiF3nQfzGTSdFU=)
2026-03-29 00:24:30.577968 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH2DPeimt7ngTSemIwN8nlj5u9uVutAQmBp+BBy4Vm1+)
2026-03-29 00:24:30.577978 | orchestrator |
2026-03-29 00:24:30.577988 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:30.577997 | orchestrator | Sunday 29 March 2026 00:24:26 +0000 (0:00:01.019) 0:00:08.374 **********
2026-03-29 00:24:30.578008 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEe9bqMicOgGj6eB8c+ggxKM7B8b3XH2/scU79qnlnS1qKVXnvWbFp9m1wMTfCEa0jYgCYsB7KS09JiKe0HgIlw=)
2026-03-29 00:24:30.578101 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCH2YNqhFLldrUpSyVU2qTtEZxnYangiI3W47puZevRmnYYfp42E+4mo8owPgQCQig0unRBV1vY3tqUAbcKyZzJUFTDaDI6V7fW4U3h0t/ijxVgaR6B7x36Earm7QijhkVatyXaMK8jlf1tq8Bbj+2UJyIsqay5Vf6JTWH+Ct3SO+ANTHkkZWpQiWup53lnkhp4cDWlLMvUFHrUMf7r9dFR/E8lh+J/P/8SMkrcrAaHWG28gtqn4ZlDmOEutoux6C8ff0imgRfdXLxnXggbsvPtxsp0ixXn44nNMZZ8vp2LvTixJpA06RceSi4OlA6SvuWpbkvZSXdbsj7yrM8D8YCoUSZh/wHy++WeU/0F2YQxSSqpgYmNvA8YR9sNZgkSdZem7rU9576KmhT7laZNT01FRAxmiTik04loVWhW2bMN3bMTXTW72rNenmTSx8e4PU7v1kjrrfqZNxdOf7FTvanwcI08HouIp5lJZ9AFmEgtoAyVlossZFlj55jwBDDJMqc=)
2026-03-29 00:24:30.578116 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJWDzECzTgDGgoIds+do4siJaYwDvW9ANKk0eknULm+D)
2026-03-29 00:24:30.578127 | orchestrator |
2026-03-29 00:24:30.578138 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:30.578148 | orchestrator | Sunday 29 March 2026 00:24:27 +0000 (0:00:01.031) 0:00:09.405 **********
2026-03-29 00:24:30.578159 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBH7mu4aKdPqzXdys5//kjCn7xS1ERJyB8g8Yn0ubSBD96SwwkpYNRpL/jMCqqw/5TX4h7KDNNubGcgHxEISBww=)
2026-03-29 00:24:30.578171 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzeLtocvAl7AYMhn9xAhb05trLadrtn82TH9x97HIu9Ot6ok2L0IYsT9aFWQqQbru5l19swESg4EylzTtMtPPlG3jubMwT+kJxQop2jjEoTv/o0MyH8lNaacqM5mwlbHfFi7CD/tK/fTXFFTa4aa6vgb4tphWTgONYaP53Z0GqCB01N1/Ensyrf5GwcPegdWhF9PxNdbsY4AanJGGYItiGMJfbO1PoMCTeS4nmNR934L5OrVGCY8PWOdecyFGTOCNnO3ElHa/L9sHLpWa0qSLY/sgODKWQb1eCoKzeSQgqVW0KTXChBwO3fxvjManlRD5a2b5FxRWmUEJjdtTAAom1MwT5jcb5+jKVKgdS/I0LOEm1gY+ja/WeSF1pL7oBhEkWOwku0nvQwCBFSagDu/J+sixvUfgwGwkc+3FiMHaNrfM4LT+jTiby0HviVl7nFwWe1nLQT2fa+GC3I8YRGVpxMVJbbPzl7WRfrIXLnypbJifLvKzROu39SG8flsMGDe0=)
2026-03-29 00:24:30.578198 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICUmw2trHZQY/w+uxTOh/aIhqACCzlkYXstZ2EZa2zN5)
2026-03-29 00:24:30.578215 | orchestrator |
2026-03-29 00:24:30.578231 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:30.578257 | orchestrator | Sunday 29 March 2026 00:24:28 +0000 (0:00:01.065) 0:00:10.471 **********
2026-03-29 00:24:30.578363 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOEJN3GGwCuBCoxVC6DeZ0iyKwuPVNLVBDOcotHn8kss)
2026-03-29 00:24:30.578386 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDE5u/18i9Ifw2IYqKtSZDNdCEFHWzzIUrU4Ajg/EXUEUTcTy3cd2JIemAsZj8HEOf8viX28VSi8Rbi1kfpnb0BcB17f1jobrXzuzVfFn/lL16MAUe5uD4NYS6kDZEmCCAgpvhq14AKsAFhF9SfbjpaMN0RBrz76VDQafvkglYq78CKEm0g5x4GErw5PGdtowCaTiQyAAztd0+XmaEs2ZqPuDZn18CLCuE7XHGi1X7jaE0C4HCAvbGI1brm7tUEEvafaFF0p7q5/NMh3SlylEoQuZ6BynJhWQbz8yQt3+FG6xAnX2/DhN9ixp3zXhZQp6FghA8r9tEtjfABGCDxfgUTrs6WS9bqwC4sI3FkQoWlqwroRz5Do57II75mr/wOa1IyIubEzl8u3A+YKVHW1osNarTI4oH6YiKSM3SaWiYnevmr39OYsXX5SvhI6MmZ95gWbO1p6BzYxYUNcSPN+L1qUxGDIpNkmiKz2T8qQodeVHVNmKDMAbLWi9la3PXaPWs=)
2026-03-29 00:24:30.578399 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGG1ZLy6nQcCHfDIL/Eoc4qEDud6w978o68Pl/PueZyPqKg1O4PiJ4bzZVbRkradQRy9VNSLpxkySy+5LnKof0c=)
2026-03-29 00:24:30.578418 | orchestrator |
2026-03-29 00:24:30.578436 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:30.578454 | orchestrator | Sunday 29 March 2026 00:24:29 +0000 (0:00:01.082) 0:00:11.554 **********
2026-03-29 00:24:30.578487 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNy5KDbPpBtMGsBk+fd96nK8fqMgEa17BkM/cJG4AvoRJG5FQIhtWw9mqEmL4RIYId4+7b4ZZOcnaV0HqJ7dtH4=)
2026-03-29 00:24:41.278955 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRu97AE3ZdfKGGyDT623N+2aWNEIQMpUWzarSlPqd6rMW7LHbhmVKRLjo0B03s7Sjqw7+yTDbvjl3PlwuvQRhdcgkKB88Vle8t/aKUdR59CBYYOX+NwPLdC+LhH4nQQ+RJPB5J4f9hZ0Ucg8lBiBei3Nk81+FnzDQ1DHfAqKYkdvSkedFeUKcDyfgHUO0Bk6e+o5DqC8zUh2ODb2Bee1ljcYGk9RW0PHLnmMRrPs22KLB0VSbILAlk1toUtJvbC23DSGz1CdLdzBRGDeybPXO68ZhacGE1GycNcruTQUlSLz2s4ZBoaNvZcznF3X4freiqc5l/knJS/Ef0gU9laArUlChpSMn1c4bi2GNQ3NIgBbXK7exciYvpfzgqQtrmv6dr0v5zWW0bowAVJUpue2lipTMdr+hluCGftUX7OVbI3aEeZtm0Bo+Ea4c35Wqm9B8LFOlCyh4H0TYA138x5ZJV0xqkfgw4VKC6Ca3UyvhrHCt9DDcGPOM1L/FSMoay2s0=)
2026-03-29 00:24:41.279090 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEzEW+27pQgEnx4tqlRDfhurIlyNqHL7y8nD66FCUXQu)
2026-03-29 00:24:41.279107 | orchestrator |
2026-03-29 00:24:41.279118 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:41.279129 | orchestrator | Sunday 29 March 2026 00:24:30 +0000 (0:00:01.015) 0:00:12.569 **********
2026-03-29 00:24:41.279139 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAKlPNs7H3z9HgmpjHGPbEKTbgtEeE87encwMPhnLtZ4)
2026-03-29 00:24:41.279150 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5EBOAOsbqWh814eXR4J+f+0ObQAjJ9aASaQk+CYgnEXvoJNWXQ8i9qDMCUiSt9iUpBykzwIQKjZExfcHpk4FSqPcKqr5HULcndgk8mVwlDR4Krmc25hCbiZeNiHlm4cgJq5bWvrOi8H1xyfGM+1kR5LC2g6EwQnZv2ud7nIA8Pz1DtpVBYM3QCqh9w0UXyhUJCpjBhanxrQIkGYuTqIt0iO5LWQI9/j2Qg3KY6YlOBeMTJgG03A/t0bCoQzYJgWQyhkPJ0x9sFAaNM4N0PL7xROhDiUX3nKzlVKU/BuC6+XZA6nMeVMBbLvuY2ItekQpqpKDzP5PoOlaW7EOSbHW/xgbege8Wdygn1yDgUeRBp0KsuiUGm+2wVK3oLItI1ov2Cx3nXPyJyU6dpaPyk3MTOX+arex/TSoVZu7QsWdfPNWbH1G209GYPcJlhBfIRCqBLyunyVOE0Ef8KwlRAsiF3aomNO92ypuT6aq9fBi2Bx5s2Tgfdkc90/Bv689NEqs=)
2026-03-29 00:24:41.279183 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGHw9S47RiSObalpw8+uFEM1Gdg36lRQtQOrxazUb4RtYOlY98wJMBmBVDIwleNWXbxjcR7RiDicnala7i+9Hd8=)
2026-03-29 00:24:41.279195 | orchestrator |
2026-03-29 00:24:41.279205 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-03-29 00:24:41.279216 | orchestrator | Sunday 29 March 2026 00:24:31 +0000 (0:00:01.065) 0:00:13.635 **********
2026-03-29 00:24:41.279226 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-29 00:24:41.279236 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-29 00:24:41.279246 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-29 00:24:41.279255 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-29 00:24:41.279264 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-29 00:24:41.279274 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-29 00:24:41.279284 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-29 00:24:41.279293 | orchestrator |
2026-03-29 00:24:41.279303 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-03-29 00:24:41.279314 | orchestrator | Sunday 29 March 2026 00:24:36 +0000 (0:00:05.246) 0:00:18.881 **********
2026-03-29 00:24:41.279324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-29 00:24:41.279336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-29 00:24:41.279346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-29 00:24:41.279356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-29 00:24:41.279365 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-29 00:24:41.279375 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-29 00:24:41.279385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-29 00:24:41.279394 | orchestrator |
2026-03-29 00:24:41.279420 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:41.279430 | orchestrator | Sunday 29 March 2026 00:24:37 +0000 (0:00:00.201) 0:00:19.083 **********
2026-03-29 00:24:41.279440 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH3434k3t9lzSHeA2FyzO6keiCC1lFV0GyxLEsUakm7C)
2026-03-29 00:24:41.279470 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7CJDIy3PksiB2bU6bFySzZzvkhHxzBmIhfl0NpThHbp6SncZUMOxpRVqnY2lOm0UQMz2lYBVigEuohvwn0+afAM0xb4xPMZRxhtZ+dMNN8F0ukZKdzx2HRZt+9haykUkqN4H0lmHrcoC01fIWZjWzkvNUrSBuXyytd3xVDNC4nIQM/oDeI5CCIRYPaEyw/qMuYpeJvsSNoPWvL2PT1EvlUQUgAmL4m0Ch8ubH8VRz7a29BtSALHwTiPCaG4Y5gg6NPvU16bSQtW8AITJLNQfd0Fp6Zlok28bvh1uZJIWI4c8fVT0GFaYnUpuZOZdjl1ygT3u6XPsvZ40SGHd71uovNMS4bejRiZfJ6QhIGSFZKhzXB9N6HwDQ1tsoCambpUEyskNG3Y4PsWLn/YbS5mPpjvX6Vh8S8TQTuuYMqflkez0APJBy9b6ZY4pIon58YrfVYR4lj6+1PHUzmY1g109Wtoou+P6/EEN61f/eGDzfw14zXT7hxRCVOFDWWy+nev0=)
2026-03-29 00:24:41.279481 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFcr58ioYGaXHYk0Z7JWKh6tTh3q4iHK5rR5zAVCT7/uFs9k0SpHMvPPLmTX+NlIc8nq4ntIis4Jq7EKAaenlzs=)
2026-03-29 00:24:41.279499 | orchestrator |
2026-03-29 00:24:41.279510 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:41.279522 | orchestrator | Sunday 29 March 2026 00:24:38 +0000 (0:00:01.089) 0:00:20.172 **********
2026-03-29 00:24:41.279534 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCcd0AjGV3sF3DkmwX48650WEPvrtZ21S2Jydr1bsOx4KrWlEYVfUo/X+5lj5pe4BUYbYcD4dH3xxvj2GDnk09QzcMVLxqs/E8LRWrO/uTr56FaZZ1PnDxPm2QRmDkTQDv8vFD0xMbVHrGTEE/Rd1GJ+e+bsXsjtkZpowGurnKbSJ//omQzW6Fgnlh2aah9zuWPLHslt+Gs0SPk2NOHqEruwXy43VFyfe9OSJlHsfr9PTGFcsa0T3m0DJ6xfvZQfHdy6zbL0+66DkRrQhCDdOW0ZJWZiH5ZIQ7OgXAR/EKNW5Ae3S9Clq7gsJExFd0GBsmxS+vJPCvPQIznh/5E7VRgN9b37+lYuVpMJHjUzcKfHNV7Gd1GDvVaETEKLyZFdKSJqdmHbYe9lD0qsOSlvGE2MTWdrsftpGo368Sj4yZ60bYuRXjP9Rd95gBrhl0bfw9afc06NVFpDkLy52+rvLn6oFVM1oywxrlv0DJvMq/F9AC3m9ISMNlXXP2W1eevAhk=)
2026-03-29 00:24:41.279545 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLz7PLMTsw98O6sAA1bcUvyrCZO56sl98LOQZcgEGvBwjwXTwCx6R5jGH+hmBDwokf6MmhRuFUiF3nQfzGTSdFU=)
2026-03-29 00:24:41.279557 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH2DPeimt7ngTSemIwN8nlj5u9uVutAQmBp+BBy4Vm1+)
2026-03-29 00:24:41.279567 | orchestrator |
2026-03-29 00:24:41.279578 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:41.279590 | orchestrator | Sunday 29 March 2026 00:24:39 +0000 (0:00:01.013) 0:00:21.185 **********
2026-03-29 00:24:41.279601 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEe9bqMicOgGj6eB8c+ggxKM7B8b3XH2/scU79qnlnS1qKVXnvWbFp9m1wMTfCEa0jYgCYsB7KS09JiKe0HgIlw=)
2026-03-29 00:24:41.279612 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCH2YNqhFLldrUpSyVU2qTtEZxnYangiI3W47puZevRmnYYfp42E+4mo8owPgQCQig0unRBV1vY3tqUAbcKyZzJUFTDaDI6V7fW4U3h0t/ijxVgaR6B7x36Earm7QijhkVatyXaMK8jlf1tq8Bbj+2UJyIsqay5Vf6JTWH+Ct3SO+ANTHkkZWpQiWup53lnkhp4cDWlLMvUFHrUMf7r9dFR/E8lh+J/P/8SMkrcrAaHWG28gtqn4ZlDmOEutoux6C8ff0imgRfdXLxnXggbsvPtxsp0ixXn44nNMZZ8vp2LvTixJpA06RceSi4OlA6SvuWpbkvZSXdbsj7yrM8D8YCoUSZh/wHy++WeU/0F2YQxSSqpgYmNvA8YR9sNZgkSdZem7rU9576KmhT7laZNT01FRAxmiTik04loVWhW2bMN3bMTXTW72rNenmTSx8e4PU7v1kjrrfqZNxdOf7FTvanwcI08HouIp5lJZ9AFmEgtoAyVlossZFlj55jwBDDJMqc=)
2026-03-29 00:24:41.279624 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJWDzECzTgDGgoIds+do4siJaYwDvW9ANKk0eknULm+D)
2026-03-29 00:24:41.279635 | orchestrator |
2026-03-29 00:24:41.279646 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:41.279657 | orchestrator | Sunday 29 March 2026 00:24:40 +0000 (0:00:01.067) 0:00:22.253 **********
2026-03-29 00:24:41.279668 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBH7mu4aKdPqzXdys5//kjCn7xS1ERJyB8g8Yn0ubSBD96SwwkpYNRpL/jMCqqw/5TX4h7KDNNubGcgHxEISBww=)
2026-03-29 00:24:41.279696 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzeLtocvAl7AYMhn9xAhb05trLadrtn82TH9x97HIu9Ot6ok2L0IYsT9aFWQqQbru5l19swESg4EylzTtMtPPlG3jubMwT+kJxQop2jjEoTv/o0MyH8lNaacqM5mwlbHfFi7CD/tK/fTXFFTa4aa6vgb4tphWTgONYaP53Z0GqCB01N1/Ensyrf5GwcPegdWhF9PxNdbsY4AanJGGYItiGMJfbO1PoMCTeS4nmNR934L5OrVGCY8PWOdecyFGTOCNnO3ElHa/L9sHLpWa0qSLY/sgODKWQb1eCoKzeSQgqVW0KTXChBwO3fxvjManlRD5a2b5FxRWmUEJjdtTAAom1MwT5jcb5+jKVKgdS/I0LOEm1gY+ja/WeSF1pL7oBhEkWOwku0nvQwCBFSagDu/J+sixvUfgwGwkc+3FiMHaNrfM4LT+jTiby0HviVl7nFwWe1nLQT2fa+GC3I8YRGVpxMVJbbPzl7WRfrIXLnypbJifLvKzROu39SG8flsMGDe0=)
2026-03-29 00:24:45.505750 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICUmw2trHZQY/w+uxTOh/aIhqACCzlkYXstZ2EZa2zN5)
2026-03-29 00:24:45.505896 | orchestrator |
2026-03-29 00:24:45.505951 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:45.505966 | orchestrator | Sunday 29 March 2026 00:24:41 +0000 (0:00:01.015) 0:00:23.268 **********
2026-03-29 00:24:45.505980 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDE5u/18i9Ifw2IYqKtSZDNdCEFHWzzIUrU4Ajg/EXUEUTcTy3cd2JIemAsZj8HEOf8viX28VSi8Rbi1kfpnb0BcB17f1jobrXzuzVfFn/lL16MAUe5uD4NYS6kDZEmCCAgpvhq14AKsAFhF9SfbjpaMN0RBrz76VDQafvkglYq78CKEm0g5x4GErw5PGdtowCaTiQyAAztd0+XmaEs2ZqPuDZn18CLCuE7XHGi1X7jaE0C4HCAvbGI1brm7tUEEvafaFF0p7q5/NMh3SlylEoQuZ6BynJhWQbz8yQt3+FG6xAnX2/DhN9ixp3zXhZQp6FghA8r9tEtjfABGCDxfgUTrs6WS9bqwC4sI3FkQoWlqwroRz5Do57II75mr/wOa1IyIubEzl8u3A+YKVHW1osNarTI4oH6YiKSM3SaWiYnevmr39OYsXX5SvhI6MmZ95gWbO1p6BzYxYUNcSPN+L1qUxGDIpNkmiKz2T8qQodeVHVNmKDMAbLWi9la3PXaPWs=)
2026-03-29 00:24:45.505995 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGG1ZLy6nQcCHfDIL/Eoc4qEDud6w978o68Pl/PueZyPqKg1O4PiJ4bzZVbRkradQRy9VNSLpxkySy+5LnKof0c=)
2026-03-29 00:24:45.506008 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOEJN3GGwCuBCoxVC6DeZ0iyKwuPVNLVBDOcotHn8kss)
2026-03-29 00:24:45.506106 | orchestrator |
2026-03-29 00:24:45.506119 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:45.506130 | orchestrator | Sunday 29 March 2026 00:24:42 +0000 (0:00:01.040) 0:00:24.309 **********
2026-03-29 00:24:45.506141 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNy5KDbPpBtMGsBk+fd96nK8fqMgEa17BkM/cJG4AvoRJG5FQIhtWw9mqEmL4RIYId4+7b4ZZOcnaV0HqJ7dtH4=)
2026-03-29 00:24:45.506153 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRu97AE3ZdfKGGyDT623N+2aWNEIQMpUWzarSlPqd6rMW7LHbhmVKRLjo0B03s7Sjqw7+yTDbvjl3PlwuvQRhdcgkKB88Vle8t/aKUdR59CBYYOX+NwPLdC+LhH4nQQ+RJPB5J4f9hZ0Ucg8lBiBei3Nk81+FnzDQ1DHfAqKYkdvSkedFeUKcDyfgHUO0Bk6e+o5DqC8zUh2ODb2Bee1ljcYGk9RW0PHLnmMRrPs22KLB0VSbILAlk1toUtJvbC23DSGz1CdLdzBRGDeybPXO68ZhacGE1GycNcruTQUlSLz2s4ZBoaNvZcznF3X4freiqc5l/knJS/Ef0gU9laArUlChpSMn1c4bi2GNQ3NIgBbXK7exciYvpfzgqQtrmv6dr0v5zWW0bowAVJUpue2lipTMdr+hluCGftUX7OVbI3aEeZtm0Bo+Ea4c35Wqm9B8LFOlCyh4H0TYA138x5ZJV0xqkfgw4VKC6Ca3UyvhrHCt9DDcGPOM1L/FSMoay2s0=)
2026-03-29 00:24:45.506164 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEzEW+27pQgEnx4tqlRDfhurIlyNqHL7y8nD66FCUXQu)
2026-03-29 00:24:45.506175 | orchestrator |
2026-03-29 00:24:45.506186 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-29 00:24:45.506197 | orchestrator | Sunday 29 March 2026 00:24:43 +0000 (0:00:01.024) 0:00:25.333 **********
2026-03-29 00:24:45.506226 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5EBOAOsbqWh814eXR4J+f+0ObQAjJ9aASaQk+CYgnEXvoJNWXQ8i9qDMCUiSt9iUpBykzwIQKjZExfcHpk4FSqPcKqr5HULcndgk8mVwlDR4Krmc25hCbiZeNiHlm4cgJq5bWvrOi8H1xyfGM+1kR5LC2g6EwQnZv2ud7nIA8Pz1DtpVBYM3QCqh9w0UXyhUJCpjBhanxrQIkGYuTqIt0iO5LWQI9/j2Qg3KY6YlOBeMTJgG03A/t0bCoQzYJgWQyhkPJ0x9sFAaNM4N0PL7xROhDiUX3nKzlVKU/BuC6+XZA6nMeVMBbLvuY2ItekQpqpKDzP5PoOlaW7EOSbHW/xgbege8Wdygn1yDgUeRBp0KsuiUGm+2wVK3oLItI1ov2Cx3nXPyJyU6dpaPyk3MTOX+arex/TSoVZu7QsWdfPNWbH1G209GYPcJlhBfIRCqBLyunyVOE0Ef8KwlRAsiF3aomNO92ypuT6aq9fBi2Bx5s2Tgfdkc90/Bv689NEqs=)
2026-03-29 00:24:45.506239 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGHw9S47RiSObalpw8+uFEM1Gdg36lRQtQOrxazUb4RtYOlY98wJMBmBVDIwleNWXbxjcR7RiDicnala7i+9Hd8=)
2026-03-29 00:24:45.506250 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAKlPNs7H3z9HgmpjHGPbEKTbgtEeE87encwMPhnLtZ4)
2026-03-29 00:24:45.506261 | orchestrator |
2026-03-29 00:24:45.506272 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2026-03-29 00:24:45.506292 | orchestrator | Sunday 29 March 2026 00:24:44 +0000 (0:00:01.024) 0:00:26.358 **********
2026-03-29 00:24:45.506306 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-29 00:24:45.506318 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-29 00:24:45.506330 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-29 00:24:45.506342 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-29 00:24:45.506355 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-29 00:24:45.506385 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-29 00:24:45.506399 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-29 00:24:45.506412 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:24:45.506424 | orchestrator |
2026-03-29 00:24:45.506436 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-03-29 00:24:45.506448 | orchestrator | Sunday 29 March 2026 00:24:44 +0000 (0:00:00.153) 0:00:26.512 **********
2026-03-29 00:24:45.506460 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:24:45.506472 | orchestrator |
2026-03-29 00:24:45.506484 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-03-29 00:24:45.506494 | orchestrator | Sunday 29 March 2026 00:24:44 +0000 (0:00:00.059) 0:00:26.571 **********
2026-03-29 00:24:45.506510 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:24:45.506521 | orchestrator |
2026-03-29 00:24:45.506532 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-03-29 00:24:45.506543 | orchestrator | Sunday 29 March 2026 00:24:44 +0000 (0:00:00.042) 0:00:26.614 **********
2026-03-29 00:24:45.506553 | orchestrator | changed: [testbed-manager]
2026-03-29 00:24:45.506564 | orchestrator |
2026-03-29 00:24:45.506574 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:24:45.506585 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 00:24:45.506598 | orchestrator |
2026-03-29 00:24:45.506608 | orchestrator |
2026-03-29 00:24:45.506619 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:24:45.506629 | orchestrator | Sunday 29 March 2026 00:24:45 +0000 (0:00:00.678) 0:00:27.292 **********
2026-03-29 00:24:45.506640 | orchestrator | ===============================================================================
2026-03-29 00:24:45.506651 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.86s
2026-03-29 00:24:45.506661 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.25s
2026-03-29 00:24:45.506673 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s
2026-03-29 00:24:45.506683 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2026-03-29 00:24:45.506694 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2026-03-29 00:24:45.506705 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2026-03-29 00:24:45.506715 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2026-03-29 00:24:45.506726 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2026-03-29 00:24:45.506737 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-03-29 00:24:45.506747 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2026-03-29 00:24:45.506758 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2026-03-29 00:24:45.506768 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2026-03-29 00:24:45.506779 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2026-03-29 00:24:45.506790 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2026-03-29 00:24:45.506808 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2026-03-29 00:24:45.506818 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2026-03-29 00:24:45.506829 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.68s
2026-03-29 00:24:45.506840 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s
2026-03-29 00:24:45.506851 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2026-03-29 00:24:45.506862 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s
2026-03-29 00:24:45.807755 | orchestrator | + osism apply squid
2026-03-29 00:24:57.708287 | orchestrator | 2026-03-29 00:24:57 | INFO  | Task 0792fef1-1dca-4ad4-9508-6e1546daa7b0 (squid) was prepared for execution.
2026-03-29 00:24:57.708383 | orchestrator | 2026-03-29 00:24:57 | INFO  | It takes a moment until task 0792fef1-1dca-4ad4-9508-6e1546daa7b0 (squid) has been started and output is visible here.
2026-03-29 00:26:55.310878 | orchestrator |
2026-03-29 00:26:55.311082 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-03-29 00:26:55.311112 | orchestrator |
2026-03-29 00:26:55.311132 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-03-29 00:26:55.311150 | orchestrator | Sunday 29 March 2026 00:25:01 +0000 (0:00:00.159) 0:00:00.159 **********
2026-03-29 00:26:55.311162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-03-29 00:26:55.311173 | orchestrator |
2026-03-29 00:26:55.311184 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-03-29 00:26:55.311195 | orchestrator | Sunday 29 March 2026 00:25:01 +0000 (0:00:00.082) 0:00:00.242 **********
2026-03-29 00:26:55.311206 | orchestrator | ok: [testbed-manager]
2026-03-29 00:26:55.311218 | orchestrator |
2026-03-29 00:26:55.311229 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-03-29 00:26:55.311240 | orchestrator | Sunday 29 March 2026 00:25:03 +0000 (0:00:01.405) 0:00:01.648 **********
2026-03-29 00:26:55.311251 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-03-29 00:26:55.311262 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-03-29 00:26:55.311272 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-03-29 00:26:55.311283 | orchestrator |
2026-03-29 00:26:55.311294 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-03-29 00:26:55.311305 | orchestrator | Sunday 29 March 2026 00:25:04 +0000 (0:00:01.189) 0:00:02.838 **********
2026-03-29 00:26:55.311315 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-03-29 00:26:55.311326 | orchestrator |
2026-03-29 00:26:55.311337 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-03-29 00:26:55.311347 | orchestrator | Sunday 29 March 2026 00:25:05 +0000 (0:00:01.060) 0:00:03.898 **********
2026-03-29 00:26:55.311358 | orchestrator | ok: [testbed-manager]
2026-03-29 00:26:55.311369 | orchestrator |
2026-03-29 00:26:55.311380 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-03-29 00:26:55.311391 | orchestrator | Sunday 29 March 2026 00:25:05 +0000 (0:00:00.349) 0:00:04.248 **********
2026-03-29 00:26:55.311405 | orchestrator | changed: [testbed-manager]
2026-03-29 00:26:55.311418 | orchestrator |
2026-03-29 00:26:55.311430 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-03-29 00:26:55.311442 | orchestrator | Sunday 29 March 2026 00:25:06 +0000 (0:00:00.880) 0:00:05.128 **********
2026-03-29 00:26:55.311455 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-03-29 00:26:55.311472 | orchestrator | ok: [testbed-manager]
2026-03-29 00:26:55.311485 | orchestrator |
2026-03-29 00:26:55.311498 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-03-29 00:26:55.311538 | orchestrator | Sunday 29 March 2026 00:25:42 +0000 (0:00:35.516) 0:00:40.645 **********
2026-03-29 00:26:55.311551 | orchestrator | changed: [testbed-manager]
2026-03-29 00:26:55.311564 | orchestrator |
2026-03-29 00:26:55.311576 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-03-29 00:26:55.311589 | orchestrator | Sunday 29 March 2026 00:25:54 +0000 (0:00:11.963) 0:00:52.608 **********
2026-03-29 00:26:55.311601 | orchestrator | Pausing for 60 seconds
2026-03-29 00:26:55.311614 | orchestrator | changed: [testbed-manager]
2026-03-29 00:26:55.311627 | orchestrator |
2026-03-29 00:26:55.311639 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-03-29 00:26:55.311651 | orchestrator | Sunday 29 March 2026 00:26:54 +0000 (0:01:00.127) 0:01:52.736 **********
2026-03-29 00:26:55.311663 | orchestrator | ok: [testbed-manager]
2026-03-29 00:26:55.311675 | orchestrator |
2026-03-29 00:26:55.311687 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-03-29 00:26:55.311701 | orchestrator | Sunday 29 March 2026 00:26:54 +0000 (0:00:00.079) 0:01:52.815 **********
2026-03-29 00:26:55.311713 | orchestrator | changed: [testbed-manager]
2026-03-29 00:26:55.311725 | orchestrator |
2026-03-29 00:26:55.311737 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:26:55.311751 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:26:55.311762 | orchestrator |
2026-03-29 00:26:55.311773 | orchestrator |
2026-03-29 00:26:55.311784 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:26:55.311794 | orchestrator | Sunday 29 March 2026 00:26:55 +0000 (0:00:00.649) 0:01:53.465 **********
2026-03-29 00:26:55.311805 | orchestrator | ===============================================================================
2026-03-29 00:26:55.311833 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.13s
2026-03-29 00:26:55.311845 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.52s
2026-03-29 00:26:55.311855 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.96s
2026-03-29 00:26:55.311866 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.41s
2026-03-29 00:26:55.311877 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s
2026-03-29 00:26:55.311887 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s
2026-03-29 00:26:55.311898 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s
2026-03-29 00:26:55.311908 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s
2026-03-29 00:26:55.311919 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s
2026-03-29 00:26:55.311955 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2026-03-29 00:26:55.311968 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2026-03-29 00:26:55.595001 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-29 00:26:55.595251 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-29 00:26:55.649171 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-29 00:26:55.649270 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-03-29 00:26:55.660837 | orchestrator | + set -e
2026-03-29 00:26:55.661032 | orchestrator | + NAMESPACE=kolla/release
2026-03-29 00:26:55.661060 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-29 00:26:55.666650 | orchestrator | ++ semver 9.5.0 9.0.0
2026-03-29 00:26:55.736905 | orchestrator | + [[ 1 -lt 0 ]]
2026-03-29 00:26:55.737545 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-03-29 00:27:07.661507 | orchestrator | 2026-03-29 00:27:07 | INFO  | Task 3cfc0de2-34c5-4bb6-acd1-44e40038141d (operator) was prepared for execution.
2026-03-29 00:27:07.661613 | orchestrator | 2026-03-29 00:27:07 | INFO  | It takes a moment until task 3cfc0de2-34c5-4bb6-acd1-44e40038141d (operator) has been started and output is visible here.
2026-03-29 00:27:24.227339 | orchestrator |
2026-03-29 00:27:24.227452 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-03-29 00:27:24.227475 | orchestrator |
2026-03-29 00:27:24.227495 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-29 00:27:24.227522 | orchestrator | Sunday 29 March 2026 00:27:11 +0000 (0:00:00.137) 0:00:00.137 **********
2026-03-29 00:27:24.227543 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:27:24.227562 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:24.227581 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:24.227599 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:27:24.227619 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:27:24.227637 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:24.227655 | orchestrator |
2026-03-29 00:27:24.227667 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-03-29 00:27:24.227678 | orchestrator | Sunday 29 March 2026 00:27:15 +0000 (0:00:04.299) 0:00:04.436 **********
2026-03-29 00:27:24.227689 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:27:24.227700 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:27:24.227710 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:24.227738 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:27:24.227749 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:24.227760 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:24.227771 | orchestrator |
2026-03-29 00:27:24.227782 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-03-29 00:27:24.227792 | orchestrator |
2026-03-29 00:27:24.227803 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-29 00:27:24.227814 | orchestrator | Sunday 29 March 2026 00:27:16 +0000 (0:00:00.766) 0:00:05.203 **********
2026-03-29 00:27:24.227825 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:27:24.227836 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:27:24.227846 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:27:24.227857 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:24.227867 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:24.227881 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:24.227893 | orchestrator |
2026-03-29 00:27:24.227933 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-29 00:27:24.227946 | orchestrator | Sunday 29 March 2026 00:27:16 +0000 (0:00:00.158) 0:00:05.361 **********
2026-03-29 00:27:24.227958 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:27:24.227970 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:27:24.227983 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:27:24.227994 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:27:24.228006 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:27:24.228018 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:27:24.228030 | orchestrator |
2026-03-29 00:27:24.228043 |
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-29 00:27:24.228055 | orchestrator | Sunday 29 March 2026 00:27:17 +0000 (0:00:00.158) 0:00:05.519 ********** 2026-03-29 00:27:24.228068 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:27:24.228082 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:27:24.228095 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:27:24.228107 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:27:24.228119 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:27:24.228131 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:27:24.228144 | orchestrator | 2026-03-29 00:27:24.228156 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-29 00:27:24.228167 | orchestrator | Sunday 29 March 2026 00:27:17 +0000 (0:00:00.608) 0:00:06.128 ********** 2026-03-29 00:27:24.228178 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:27:24.228188 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:27:24.228199 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:27:24.228210 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:27:24.228221 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:27:24.228231 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:27:24.228265 | orchestrator | 2026-03-29 00:27:24.228277 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-29 00:27:24.228288 | orchestrator | Sunday 29 March 2026 00:27:18 +0000 (0:00:00.802) 0:00:06.931 ********** 2026-03-29 00:27:24.228298 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-29 00:27:24.228310 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-29 00:27:24.228320 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-29 00:27:24.228331 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-29 00:27:24.228342 | 
orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-29 00:27:24.228352 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-29 00:27:24.228363 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-29 00:27:24.228374 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-29 00:27:24.228385 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-29 00:27:24.228395 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-29 00:27:24.228406 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-29 00:27:24.228417 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-29 00:27:24.228428 | orchestrator | 2026-03-29 00:27:24.228439 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-29 00:27:24.228449 | orchestrator | Sunday 29 March 2026 00:27:19 +0000 (0:00:01.192) 0:00:08.124 ********** 2026-03-29 00:27:24.228460 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:27:24.228471 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:27:24.228482 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:27:24.228493 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:27:24.228506 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:27:24.228525 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:27:24.228543 | orchestrator | 2026-03-29 00:27:24.228574 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-29 00:27:24.228592 | orchestrator | Sunday 29 March 2026 00:27:20 +0000 (0:00:01.168) 0:00:09.293 ********** 2026-03-29 00:27:24.228611 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-29 00:27:24.228628 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-29 00:27:24.228646 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-29 00:27:24.228665 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 00:27:24.228708 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 00:27:24.228728 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 00:27:24.228749 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 00:27:24.228770 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 00:27:24.228788 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-29 00:27:24.228807 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-29 00:27:24.228826 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-29 00:27:24.228844 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-29 00:27:24.228863 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-29 00:27:24.228882 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-29 00:27:24.228926 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-29 00:27:24.228947 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-29 00:27:24.228964 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-29 00:27:24.228982 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-29 00:27:24.228999 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-29 00:27:24.229018 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-29 00:27:24.229051 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-29 00:27:24.229070 | 
orchestrator | 2026-03-29 00:27:24.229088 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-29 00:27:24.229108 | orchestrator | Sunday 29 March 2026 00:27:22 +0000 (0:00:01.278) 0:00:10.571 ********** 2026-03-29 00:27:24.229126 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:27:24.229144 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:27:24.229160 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:27:24.229176 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:27:24.229191 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:27:24.229206 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:27:24.229222 | orchestrator | 2026-03-29 00:27:24.229237 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-29 00:27:24.229253 | orchestrator | Sunday 29 March 2026 00:27:22 +0000 (0:00:00.132) 0:00:10.703 ********** 2026-03-29 00:27:24.229269 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:27:24.229285 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:27:24.229304 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:27:24.229323 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:27:24.229340 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:27:24.229358 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:27:24.229377 | orchestrator | 2026-03-29 00:27:24.229395 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-29 00:27:24.229414 | orchestrator | Sunday 29 March 2026 00:27:22 +0000 (0:00:00.181) 0:00:10.885 ********** 2026-03-29 00:27:24.229434 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:27:24.229453 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:27:24.229471 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:27:24.229489 | orchestrator | changed: [testbed-node-4] 2026-03-29 
00:27:24.229507 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:27:24.229525 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:27:24.229544 | orchestrator | 2026-03-29 00:27:24.229563 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-29 00:27:24.229581 | orchestrator | Sunday 29 March 2026 00:27:23 +0000 (0:00:00.580) 0:00:11.465 ********** 2026-03-29 00:27:24.229600 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:27:24.229618 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:27:24.229637 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:27:24.229654 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:27:24.229689 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:27:24.229710 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:27:24.229730 | orchestrator | 2026-03-29 00:27:24.229749 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-29 00:27:24.229770 | orchestrator | Sunday 29 March 2026 00:27:23 +0000 (0:00:00.165) 0:00:11.631 ********** 2026-03-29 00:27:24.229791 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-29 00:27:24.229811 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 00:27:24.229828 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:27:24.229847 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:27:24.229865 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-29 00:27:24.229884 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:27:24.229928 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 00:27:24.229946 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:27:24.229963 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 00:27:24.229981 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:27:24.229998 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 
00:27:24.230085 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:27:24.230110 | orchestrator | 2026-03-29 00:27:24.230129 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-29 00:27:24.230148 | orchestrator | Sunday 29 March 2026 00:27:23 +0000 (0:00:00.716) 0:00:12.347 ********** 2026-03-29 00:27:24.230183 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:27:24.230202 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:27:24.230222 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:27:24.230241 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:27:24.230261 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:27:24.230281 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:27:24.230337 | orchestrator | 2026-03-29 00:27:24.230357 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-29 00:27:24.230377 | orchestrator | Sunday 29 March 2026 00:27:24 +0000 (0:00:00.161) 0:00:12.509 ********** 2026-03-29 00:27:24.230397 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:27:24.230417 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:27:24.230437 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:27:24.230457 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:27:24.230498 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:27:25.629483 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:27:25.629583 | orchestrator | 2026-03-29 00:27:25.629600 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-29 00:27:25.629613 | orchestrator | Sunday 29 March 2026 00:27:24 +0000 (0:00:00.150) 0:00:12.659 ********** 2026-03-29 00:27:25.629624 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:27:25.629635 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:27:25.629645 | orchestrator | skipping: [testbed-node-2] 2026-03-29 
00:27:25.629656 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:27:25.629667 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:27:25.629677 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:27:25.629688 | orchestrator | 2026-03-29 00:27:25.629699 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-29 00:27:25.629709 | orchestrator | Sunday 29 March 2026 00:27:24 +0000 (0:00:00.149) 0:00:12.809 ********** 2026-03-29 00:27:25.629720 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:27:25.629730 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:27:25.629759 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:27:25.629770 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:27:25.629781 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:27:25.629791 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:27:25.629802 | orchestrator | 2026-03-29 00:27:25.629813 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-29 00:27:25.629823 | orchestrator | Sunday 29 March 2026 00:27:25 +0000 (0:00:00.815) 0:00:13.624 ********** 2026-03-29 00:27:25.629834 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:27:25.629844 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:27:25.629856 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:27:25.629867 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:27:25.629877 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:27:25.629888 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:27:25.629970 | orchestrator | 2026-03-29 00:27:25.629992 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:27:25.630076 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 00:27:25.630108 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 00:27:25.630129 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 00:27:25.630151 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 00:27:25.630171 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 00:27:25.630211 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 00:27:25.630225 | orchestrator | 2026-03-29 00:27:25.630238 | orchestrator | 2026-03-29 00:27:25.630251 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:27:25.630263 | orchestrator | Sunday 29 March 2026 00:27:25 +0000 (0:00:00.205) 0:00:13.830 ********** 2026-03-29 00:27:25.630276 | orchestrator | =============================================================================== 2026-03-29 00:27:25.630287 | orchestrator | Gathering Facts --------------------------------------------------------- 4.30s 2026-03-29 00:27:25.630300 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s 2026-03-29 00:27:25.630313 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2026-03-29 00:27:25.630325 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.17s 2026-03-29 00:27:25.630338 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.82s 2026-03-29 00:27:25.630350 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2026-03-29 00:27:25.630362 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s 2026-03-29 00:27:25.630374 | orchestrator | osism.commons.operator : Set ssh 
authorized keys ------------------------ 0.72s 2026-03-29 00:27:25.630386 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s 2026-03-29 00:27:25.630397 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s 2026-03-29 00:27:25.630407 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s 2026-03-29 00:27:25.630418 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.18s 2026-03-29 00:27:25.630429 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2026-03-29 00:27:25.630440 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2026-03-29 00:27:25.630450 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2026-03-29 00:27:25.630461 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2026-03-29 00:27:25.630472 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2026-03-29 00:27:25.630482 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2026-03-29 00:27:25.630493 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.13s 2026-03-29 00:27:25.968562 | orchestrator | + osism apply --environment custom facts 2026-03-29 00:27:27.868157 | orchestrator | 2026-03-29 00:27:27 | INFO  | Trying to run play facts in environment custom 2026-03-29 00:27:38.111085 | orchestrator | 2026-03-29 00:27:38 | INFO  | Task 369c8c35-18f7-4ed8-a6c4-b6defb4e7f9b (facts) was prepared for execution. 2026-03-29 00:27:38.111174 | orchestrator | 2026-03-29 00:27:38 | INFO  | It takes a moment until task 369c8c35-18f7-4ed8-a6c4-b6defb4e7f9b (facts) has been started and output is visible here. 
2026-03-29 00:28:26.657019 | orchestrator | 2026-03-29 00:28:26.657108 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-29 00:28:26.657118 | orchestrator | 2026-03-29 00:28:26.657125 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-29 00:28:26.657132 | orchestrator | Sunday 29 March 2026 00:27:42 +0000 (0:00:00.085) 0:00:00.085 ********** 2026-03-29 00:28:26.657139 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:26.657146 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:28:26.657152 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:28:26.657158 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:28:26.657164 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:28:26.657170 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:28:26.657193 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:28:26.657199 | orchestrator | 2026-03-29 00:28:26.657206 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-29 00:28:26.657211 | orchestrator | Sunday 29 March 2026 00:27:43 +0000 (0:00:01.361) 0:00:01.447 ********** 2026-03-29 00:28:26.657313 | orchestrator | ok: [testbed-manager] 2026-03-29 00:28:26.657321 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:28:26.657327 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:28:26.657333 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:28:26.657339 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:28:26.657344 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:28:26.657350 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:28:26.657355 | orchestrator | 2026-03-29 00:28:26.657361 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-29 00:28:26.657367 | orchestrator | 2026-03-29 00:28:26.657373 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-03-29 00:28:26.657379 | orchestrator | Sunday 29 March 2026 00:27:44 +0000 (0:00:01.379) 0:00:02.826 ********** 2026-03-29 00:28:26.657384 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:26.657390 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:26.657396 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:26.657404 | orchestrator | 2026-03-29 00:28:26.657414 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-29 00:28:26.657424 | orchestrator | Sunday 29 March 2026 00:27:44 +0000 (0:00:00.086) 0:00:02.913 ********** 2026-03-29 00:28:26.657433 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:26.657441 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:26.657450 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:26.657459 | orchestrator | 2026-03-29 00:28:26.657467 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-29 00:28:26.657476 | orchestrator | Sunday 29 March 2026 00:27:45 +0000 (0:00:00.192) 0:00:03.105 ********** 2026-03-29 00:28:26.657485 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:26.657495 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:26.657504 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:26.657513 | orchestrator | 2026-03-29 00:28:26.657521 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-29 00:28:26.657528 | orchestrator | Sunday 29 March 2026 00:27:45 +0000 (0:00:00.223) 0:00:03.329 ********** 2026-03-29 00:28:26.657535 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:28:26.657542 | orchestrator | 2026-03-29 00:28:26.657549 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-03-29 00:28:26.657555 | orchestrator | Sunday 29 March 2026 00:27:45 +0000 (0:00:00.145) 0:00:03.475 ********** 2026-03-29 00:28:26.657560 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:26.657566 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:26.657571 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:26.657577 | orchestrator | 2026-03-29 00:28:26.657584 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-29 00:28:26.657590 | orchestrator | Sunday 29 March 2026 00:27:46 +0000 (0:00:00.562) 0:00:04.037 ********** 2026-03-29 00:28:26.657597 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:28:26.657603 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:28:26.657610 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:28:26.657616 | orchestrator | 2026-03-29 00:28:26.657622 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-29 00:28:26.657629 | orchestrator | Sunday 29 March 2026 00:27:46 +0000 (0:00:00.138) 0:00:04.176 ********** 2026-03-29 00:28:26.657635 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:28:26.657641 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:28:26.657650 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:28:26.657660 | orchestrator | 2026-03-29 00:28:26.657669 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-29 00:28:26.657739 | orchestrator | Sunday 29 March 2026 00:27:47 +0000 (0:00:01.202) 0:00:05.378 ********** 2026-03-29 00:28:26.657747 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:28:26.657754 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:28:26.657760 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:28:26.657766 | orchestrator | 2026-03-29 00:28:26.657773 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-29 
00:28:26.657818 | orchestrator | Sunday 29 March 2026 00:27:47 +0000 (0:00:00.477) 0:00:05.856 ********** 2026-03-29 00:28:26.657825 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:28:26.657832 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:28:26.657838 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:28:26.657867 | orchestrator | 2026-03-29 00:28:26.657875 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-29 00:28:26.657882 | orchestrator | Sunday 29 March 2026 00:27:48 +0000 (0:00:01.093) 0:00:06.949 ********** 2026-03-29 00:28:26.657888 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:28:26.657895 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:28:26.657901 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:28:26.657908 | orchestrator | 2026-03-29 00:28:26.657914 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-29 00:28:26.657921 | orchestrator | Sunday 29 March 2026 00:28:06 +0000 (0:00:17.631) 0:00:24.581 ********** 2026-03-29 00:28:26.657927 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:28:26.657934 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:28:26.657940 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:28:26.657947 | orchestrator | 2026-03-29 00:28:26.657953 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-29 00:28:26.657974 | orchestrator | Sunday 29 March 2026 00:28:06 +0000 (0:00:00.100) 0:00:24.682 ********** 2026-03-29 00:28:26.657980 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:28:26.657986 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:28:26.657992 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:28:26.657997 | orchestrator | 2026-03-29 00:28:26.658003 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-29 
00:28:26.658054 | orchestrator | Sunday 29 March 2026 00:28:16 +0000 (0:00:09.311) 0:00:33.993 **********
2026-03-29 00:28:26.658062 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:26.658068 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:26.658074 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:26.658080 | orchestrator |
2026-03-29 00:28:26.658085 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-29 00:28:26.658091 | orchestrator | Sunday 29 March 2026 00:28:16 +0000 (0:00:00.503) 0:00:34.497 **********
2026-03-29 00:28:26.658097 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-29 00:28:26.658104 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-29 00:28:26.658109 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-29 00:28:26.658115 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-29 00:28:26.658121 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-29 00:28:26.658127 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-29 00:28:26.658132 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-29 00:28:26.658271 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-29 00:28:26.658277 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-29 00:28:26.658283 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-29 00:28:26.658289 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-29 00:28:26.658295 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-29 00:28:26.658300 | orchestrator |
2026-03-29 00:28:26.658306 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-29 00:28:26.658319 | orchestrator | Sunday 29 March 2026 00:28:20 +0000 (0:00:03.869) 0:00:38.366 **********
2026-03-29 00:28:26.658325 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:26.658330 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:26.658336 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:26.658342 | orchestrator |
2026-03-29 00:28:26.658347 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-29 00:28:26.658353 | orchestrator |
2026-03-29 00:28:26.658359 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-29 00:28:26.658365 | orchestrator | Sunday 29 March 2026 00:28:21 +0000 (0:00:01.519) 0:00:39.886 **********
2026-03-29 00:28:26.658370 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:28:26.658376 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:28:26.658382 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:28:26.658387 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:26.658393 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:26.658399 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:26.658404 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:26.658410 | orchestrator |
2026-03-29 00:28:26.658416 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:28:26.658423 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:28:26.658429 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:28:26.658436 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:28:26.658442 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:28:26.658448 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:28:26.658454 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:28:26.658460 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:28:26.658465 | orchestrator |
2026-03-29 00:28:26.658471 | orchestrator |
2026-03-29 00:28:26.658477 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:28:26.658482 | orchestrator | Sunday 29 March 2026 00:28:26 +0000 (0:00:04.717) 0:00:44.604 **********
2026-03-29 00:28:26.658488 | orchestrator | ===============================================================================
2026-03-29 00:28:26.658494 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.63s
2026-03-29 00:28:26.658500 | orchestrator | Install required packages (Debian) -------------------------------------- 9.31s
2026-03-29 00:28:26.658505 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.72s
2026-03-29 00:28:26.658511 | orchestrator | Copy fact files --------------------------------------------------------- 3.87s
2026-03-29 00:28:26.658516 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.52s
2026-03-29 00:28:26.658522 | orchestrator | Copy fact file ---------------------------------------------------------- 1.38s
2026-03-29 00:28:26.658534 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s
2026-03-29 00:28:26.889596 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.20s
2026-03-29 00:28:26.889723 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s
2026-03-29 00:28:26.889768 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.56s
2026-03-29 00:28:26.889790 | orchestrator | Create custom facts directory ------------------------------------------- 0.50s
2026-03-29 00:28:26.889884 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2026-03-29 00:28:26.889900 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-03-29 00:28:26.889910 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-03-29 00:28:26.889921 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-03-29 00:28:26.889933 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-03-29 00:28:26.889944 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-03-29 00:28:26.889954 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-03-29 00:28:27.178496 | orchestrator | + osism apply bootstrap
2026-03-29 00:28:39.306785 | orchestrator | 2026-03-29 00:28:39 | INFO  | Task 63c77bc4-a7be-434a-a72d-9970d42deced (bootstrap) was prepared for execution.
2026-03-29 00:28:39.307007 | orchestrator | 2026-03-29 00:28:39 | INFO  | It takes a moment until task 63c77bc4-a7be-434a-a72d-9970d42deced (bootstrap) has been started and output is visible here.
2026-03-29 00:28:56.206100 | orchestrator |
2026-03-29 00:28:56.206153 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-29 00:28:56.206159 | orchestrator |
2026-03-29 00:28:56.206165 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-29 00:28:56.206172 | orchestrator | Sunday 29 March 2026 00:28:43 +0000 (0:00:00.117) 0:00:00.117 **********
2026-03-29 00:28:56.206178 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:56.206185 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:56.206191 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:56.206197 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:56.206203 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:28:56.206209 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:28:56.206216 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:28:56.206222 | orchestrator |
2026-03-29 00:28:56.206229 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-29 00:28:56.206233 | orchestrator |
2026-03-29 00:28:56.206237 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-29 00:28:56.206241 | orchestrator | Sunday 29 March 2026 00:28:43 +0000 (0:00:00.174) 0:00:00.291 **********
2026-03-29 00:28:56.206245 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:28:56.206249 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:28:56.206253 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:28:56.206256 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:56.206260 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:56.206264 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:56.206268 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:56.206271 | orchestrator |
2026-03-29 00:28:56.206275 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-29 00:28:56.206279 | orchestrator |
2026-03-29 00:28:56.206283 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-29 00:28:56.206286 | orchestrator | Sunday 29 March 2026 00:28:47 +0000 (0:00:04.060) 0:00:04.352 **********
2026-03-29 00:28:56.206291 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-29 00:28:56.206295 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-29 00:28:56.206301 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-29 00:28:56.206308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-29 00:28:56.206314 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-29 00:28:56.206320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 00:28:56.206327 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-29 00:28:56.206333 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-29 00:28:56.206339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 00:28:56.206354 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-29 00:28:56.206358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 00:28:56.206362 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-29 00:28:56.206366 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-29 00:28:56.206369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-29 00:28:56.206373 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-29 00:28:56.206377 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:28:56.206381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-29 00:28:56.206385 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-29 00:28:56.206388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-29 00:28:56.206392 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:28:56.206396 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-29 00:28:56.206399 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-29 00:28:56.206403 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-29 00:28:56.206407 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-29 00:28:56.206410 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-29 00:28:56.206414 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-29 00:28:56.206420 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-29 00:28:56.206425 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-29 00:28:56.206432 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-29 00:28:56.206438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-29 00:28:56.206444 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-29 00:28:56.206451 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-29 00:28:56.206458 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-29 00:28:56.206464 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:28:56.206471 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-29 00:28:56.206477 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-29 00:28:56.206483 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-29 00:28:56.206487 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-29 00:28:56.206491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 00:28:56.206496 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-29 00:28:56.206502 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-29 00:28:56.206507 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:28:56.206511 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-29 00:28:56.206517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 00:28:56.206522 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-29 00:28:56.206526 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-29 00:28:56.206539 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 00:28:56.206543 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:28:56.206547 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-29 00:28:56.206550 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-29 00:28:56.206563 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-29 00:28:56.206567 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-29 00:28:56.206571 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-29 00:28:56.206575 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:28:56.206582 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-29 00:28:56.206586 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:28:56.206590 | orchestrator |
2026-03-29 00:28:56.206594 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-29 00:28:56.206598 | orchestrator |
2026-03-29 00:28:56.206601 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-29 00:28:56.206605 | orchestrator | Sunday 29 March 2026 00:28:48 +0000 (0:00:00.400) 0:00:04.753 **********
2026-03-29 00:28:56.206609 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:28:56.206613 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:56.206616 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:56.206620 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:56.206624 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:28:56.206628 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:28:56.206631 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:56.206635 | orchestrator |
2026-03-29 00:28:56.206639 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-29 00:28:56.206643 | orchestrator | Sunday 29 March 2026 00:28:49 +0000 (0:00:01.430) 0:00:06.183 **********
2026-03-29 00:28:56.206647 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:56.206650 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:28:56.206654 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:28:56.206658 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:28:56.206662 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:28:56.206667 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:28:56.206671 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:28:56.206675 | orchestrator |
2026-03-29 00:28:56.206680 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-29 00:28:56.206684 | orchestrator | Sunday 29 March 2026 00:28:50 +0000 (0:00:01.277) 0:00:07.460 **********
2026-03-29 00:28:56.206691 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:28:56.206699 | orchestrator |
2026-03-29 00:28:56.206706 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-29 00:28:56.206713 | orchestrator | Sunday 29 March 2026 00:28:51 +0000 (0:00:00.287) 0:00:07.748 **********
2026-03-29 00:28:56.206720 | orchestrator | changed: [testbed-manager]
2026-03-29 00:28:56.206725 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:28:56.206729 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:28:56.206733 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:28:56.206738 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:28:56.206743 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:28:56.206750 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:28:56.206757 | orchestrator |
2026-03-29 00:28:56.206764 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-29 00:28:56.206771 | orchestrator | Sunday 29 March 2026 00:28:53 +0000 (0:00:02.232) 0:00:09.981 **********
2026-03-29 00:28:56.206778 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:28:56.206783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:28:56.206789 | orchestrator |
2026-03-29 00:28:56.206793 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-29 00:28:56.206797 | orchestrator | Sunday 29 March 2026 00:28:53 +0000 (0:00:00.276) 0:00:10.257 **********
2026-03-29 00:28:56.206802 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:28:56.206820 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:28:56.206827 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:28:56.206832 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:28:56.206836 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:28:56.206840 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:28:56.206848 | orchestrator |
2026-03-29 00:28:56.206854 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-29 00:28:56.206859 | orchestrator | Sunday 29 March 2026 00:28:54 +0000 (0:00:01.162) 0:00:11.420 **********
2026-03-29 00:28:56.206863 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:28:56.206868 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:28:56.206875 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:28:56.206882 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:28:56.206889 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:28:56.206895 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:28:56.206902 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:28:56.206909 | orchestrator |
2026-03-29 00:28:56.206915 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-29 00:28:56.206919 | orchestrator | Sunday 29 March 2026 00:28:55 +0000 (0:00:00.800) 0:00:12.220 **********
2026-03-29 00:28:56.206924 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:28:56.206928 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:28:56.206932 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:28:56.206936 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:28:56.206940 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:28:56.206944 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:28:56.206948 | orchestrator | ok: [testbed-manager]
2026-03-29 00:28:56.206953 | orchestrator |
2026-03-29 00:28:56.206957 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-29 00:28:56.206962 | orchestrator | Sunday 29 March 2026 00:28:56 +0000 (0:00:00.260) 0:00:12.642 **********
2026-03-29 00:28:56.206966 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:28:56.206970 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:28:56.206978 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:29:08.815995 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:29:08.816113 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:29:08.816129 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:29:08.816141 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:29:08.816153 | orchestrator |
2026-03-29 00:29:08.816166 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-29 00:29:08.816179 | orchestrator | Sunday 29 March 2026 00:28:56 +0000 (0:00:00.260) 0:00:12.902 **********
2026-03-29 00:29:08.816192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:29:08.816221 | orchestrator |
2026-03-29 00:29:08.816233 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-29 00:29:08.816245 | orchestrator | Sunday 29 March 2026 00:28:56 +0000 (0:00:00.285) 0:00:13.188 **********
2026-03-29 00:29:08.816256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:29:08.816268 | orchestrator |
2026-03-29 00:29:08.816279 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-29 00:29:08.816290 | orchestrator | Sunday 29 March 2026 00:28:56 +0000 (0:00:00.297) 0:00:13.486 **********
2026-03-29 00:29:08.816301 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:08.816313 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.816324 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:08.816338 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:29:08.816351 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:29:08.816364 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:29:08.816377 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:08.816390 | orchestrator |
2026-03-29 00:29:08.816403 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-29 00:29:08.816415 | orchestrator | Sunday 29 March 2026 00:28:58 +0000 (0:00:01.387) 0:00:14.873 **********
2026-03-29 00:29:08.816453 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:29:08.816468 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:29:08.816480 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:29:08.816493 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:29:08.816506 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:29:08.816518 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:29:08.816530 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:29:08.816544 | orchestrator |
2026-03-29 00:29:08.816556 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-29 00:29:08.816569 | orchestrator | Sunday 29 March 2026 00:28:58 +0000 (0:00:00.334) 0:00:15.207 **********
2026-03-29 00:29:08.816582 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.816595 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:08.816608 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:08.816620 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:08.816633 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:29:08.816645 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:29:08.816658 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:29:08.816671 | orchestrator |
2026-03-29 00:29:08.816683 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-29 00:29:08.816697 | orchestrator | Sunday 29 March 2026 00:28:59 +0000 (0:00:00.551) 0:00:15.759 **********
2026-03-29 00:29:08.816710 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:29:08.816724 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:29:08.816737 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:29:08.816749 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:29:08.816759 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:29:08.816770 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:29:08.816782 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:29:08.816815 | orchestrator |
2026-03-29 00:29:08.816828 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-29 00:29:08.816840 | orchestrator | Sunday 29 March 2026 00:28:59 +0000 (0:00:00.310) 0:00:16.070 **********
2026-03-29 00:29:08.816851 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.816862 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:29:08.816873 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:29:08.816884 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:29:08.816911 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:29:08.816923 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:29:08.816942 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:29:08.816953 | orchestrator |
2026-03-29 00:29:08.816964 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-29 00:29:08.816975 | orchestrator | Sunday 29 March 2026 00:29:00 +0000 (0:00:00.678) 0:00:16.748 **********
2026-03-29 00:29:08.816986 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.816997 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:29:08.817008 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:29:08.817019 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:29:08.817030 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:29:08.817041 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:29:08.817052 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:29:08.817062 | orchestrator |
2026-03-29 00:29:08.817073 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-29 00:29:08.817084 | orchestrator | Sunday 29 March 2026 00:29:01 +0000 (0:00:01.170) 0:00:17.918 **********
2026-03-29 00:29:08.817095 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:29:08.817106 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:08.817117 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.817128 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:08.817139 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:29:08.817150 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:08.817160 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:29:08.817171 | orchestrator |
2026-03-29 00:29:08.817182 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-29 00:29:08.817201 | orchestrator | Sunday 29 March 2026 00:29:02 +0000 (0:00:01.097) 0:00:19.016 **********
2026-03-29 00:29:08.817232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:29:08.817245 | orchestrator |
2026-03-29 00:29:08.817256 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-29 00:29:08.817267 | orchestrator | Sunday 29 March 2026 00:29:02 +0000 (0:00:00.314) 0:00:19.330 **********
2026-03-29 00:29:08.817277 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:29:08.817288 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:29:08.817299 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:29:08.817310 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:29:08.817321 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:29:08.817332 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:29:08.817343 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:29:08.817354 | orchestrator |
2026-03-29 00:29:08.817365 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-29 00:29:08.817375 | orchestrator | Sunday 29 March 2026 00:29:04 +0000 (0:00:01.315) 0:00:20.646 **********
2026-03-29 00:29:08.817386 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.817397 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:08.817408 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:08.817419 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:08.817430 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:29:08.817440 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:29:08.817451 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:29:08.817462 | orchestrator |
2026-03-29 00:29:08.817472 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-29 00:29:08.817483 | orchestrator | Sunday 29 March 2026 00:29:04 +0000 (0:00:00.261) 0:00:20.907 **********
2026-03-29 00:29:08.817494 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.817505 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:08.817516 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:08.817526 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:08.817537 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:29:08.817548 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:29:08.817558 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:29:08.817569 | orchestrator |
2026-03-29 00:29:08.817580 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-29 00:29:08.817591 | orchestrator | Sunday 29 March 2026 00:29:04 +0000 (0:00:00.229) 0:00:21.137 **********
2026-03-29 00:29:08.817602 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.817613 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:08.817623 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:08.817634 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:08.817645 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:29:08.817656 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:29:08.817666 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:29:08.817677 | orchestrator |
2026-03-29 00:29:08.817688 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-29 00:29:08.817699 | orchestrator | Sunday 29 March 2026 00:29:04 +0000 (0:00:00.253) 0:00:21.391 **********
2026-03-29 00:29:08.817711 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:29:08.817723 | orchestrator |
2026-03-29 00:29:08.817734 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-29 00:29:08.817745 | orchestrator | Sunday 29 March 2026 00:29:05 +0000 (0:00:00.336) 0:00:21.727 **********
2026-03-29 00:29:08.817756 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.817767 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:08.817785 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:08.817818 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:08.817829 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:29:08.817840 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:29:08.817850 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:29:08.817861 | orchestrator |
2026-03-29 00:29:08.817872 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-29 00:29:08.817898 | orchestrator | Sunday 29 March 2026 00:29:05 +0000 (0:00:00.527) 0:00:22.255 **********
2026-03-29 00:29:08.817909 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:29:08.817920 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:29:08.817931 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:29:08.817942 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:29:08.817953 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:29:08.817963 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:29:08.817974 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:29:08.817985 | orchestrator |
2026-03-29 00:29:08.817997 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-29 00:29:08.818008 | orchestrator | Sunday 29 March 2026 00:29:05 +0000 (0:00:00.230) 0:00:22.485 **********
2026-03-29 00:29:08.818080 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.818092 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:08.818103 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:08.818113 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:29:08.818124 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:08.818135 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:29:08.818146 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:29:08.818156 | orchestrator |
2026-03-29 00:29:08.818167 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-29 00:29:08.818192 | orchestrator | Sunday 29 March 2026 00:29:07 +0000 (0:00:01.151) 0:00:23.637 **********
2026-03-29 00:29:08.818203 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.818214 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:08.818225 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:08.818235 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:08.818246 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:29:08.818257 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:29:08.818276 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:29:08.818287 | orchestrator |
2026-03-29 00:29:08.818298 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-29 00:29:08.818309 | orchestrator | Sunday 29 March 2026 00:29:07 +0000 (0:00:00.573) 0:00:24.211 **********
2026-03-29 00:29:08.818320 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:08.818331 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:08.818342 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:08.818353 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:08.818372 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:29:52.646334 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:29:52.646417 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:29:52.646423 | orchestrator |
2026-03-29 00:29:52.646428 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-29 00:29:52.646434 | orchestrator | Sunday 29 March 2026 00:29:08 +0000 (0:00:01.191) 0:00:25.403 **********
2026-03-29 00:29:52.646438 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:52.646443 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:52.646447 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:52.646451 | orchestrator | changed: [testbed-manager]
2026-03-29 00:29:52.646455 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:29:52.646459 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:29:52.646463 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:29:52.646467 | orchestrator |
2026-03-29 00:29:52.646471 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-29 00:29:52.646475 | orchestrator | Sunday 29 March 2026 00:29:27 +0000 (0:00:18.503) 0:00:43.906 **********
2026-03-29 00:29:52.646479 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:52.646496 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:52.646500 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:52.646504 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:52.646507 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:29:52.646511 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:29:52.646515 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:29:52.646518 | orchestrator |
2026-03-29 00:29:52.646522 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-29 00:29:52.646526 | orchestrator | Sunday 29 March 2026 00:29:27 +0000 (0:00:00.236) 0:00:44.143 **********
2026-03-29 00:29:52.646530 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:52.646534 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:52.646537 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:52.646541 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:52.646545 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:29:52.646548 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:29:52.646552 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:29:52.646556 | orchestrator |
2026-03-29 00:29:52.646559 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-29 00:29:52.646563 | orchestrator | Sunday 29 March 2026 00:29:27 +0000 (0:00:00.232) 0:00:44.376 **********
2026-03-29 00:29:52.646567 | orchestrator | ok: [testbed-manager]
2026-03-29 00:29:52.646570 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:29:52.646574 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:29:52.646589 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:29:52.646593 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:29:52.646603 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:29:52.646607 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:29:52.646611 | orchestrator |
2026-03-29 00:29:52.646615 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-29 00:29:52.646619 | orchestrator | Sunday 29 March 2026 00:29:27 +0000 (0:00:00.227) 0:00:44.604 **********
2026-03-29
00:29:52.646625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:29:52.646630 | orchestrator | 2026-03-29 00:29:52.646634 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-29 00:29:52.646638 | orchestrator | Sunday 29 March 2026 00:29:28 +0000 (0:00:00.320) 0:00:44.925 ********** 2026-03-29 00:29:52.646642 | orchestrator | ok: [testbed-manager] 2026-03-29 00:29:52.646646 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:29:52.646650 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:29:52.646653 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:29:52.646657 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:29:52.646661 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:29:52.646665 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:29:52.646668 | orchestrator | 2026-03-29 00:29:52.646672 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-29 00:29:52.646676 | orchestrator | Sunday 29 March 2026 00:29:30 +0000 (0:00:02.227) 0:00:47.152 ********** 2026-03-29 00:29:52.646680 | orchestrator | changed: [testbed-manager] 2026-03-29 00:29:52.646683 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:29:52.646687 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:29:52.646691 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:29:52.646695 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:29:52.646698 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:29:52.646702 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:29:52.646706 | orchestrator | 2026-03-29 00:29:52.646709 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-29 00:29:52.646724 | 
orchestrator | Sunday 29 March 2026 00:29:31 +0000 (0:00:01.083) 0:00:48.235 ********** 2026-03-29 00:29:52.646728 | orchestrator | ok: [testbed-manager] 2026-03-29 00:29:52.646732 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:29:52.646736 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:29:52.646777 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:29:52.646782 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:29:52.646786 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:29:52.646789 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:29:52.646793 | orchestrator | 2026-03-29 00:29:52.646797 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-29 00:29:52.646801 | orchestrator | Sunday 29 March 2026 00:29:32 +0000 (0:00:00.901) 0:00:49.137 ********** 2026-03-29 00:29:52.646808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:29:52.646816 | orchestrator | 2026-03-29 00:29:52.646822 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-29 00:29:52.646829 | orchestrator | Sunday 29 March 2026 00:29:32 +0000 (0:00:00.308) 0:00:49.446 ********** 2026-03-29 00:29:52.646835 | orchestrator | changed: [testbed-manager] 2026-03-29 00:29:52.646843 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:29:52.646851 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:29:52.646858 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:29:52.646863 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:29:52.646869 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:29:52.646874 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:29:52.646880 | orchestrator | 2026-03-29 00:29:52.646902 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-03-29 00:29:52.646910 | orchestrator | Sunday 29 March 2026 00:29:33 +0000 (0:00:01.060) 0:00:50.506 ********** 2026-03-29 00:29:52.646917 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:29:52.646923 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:29:52.646930 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:29:52.646936 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:29:52.646942 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:29:52.646948 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:29:52.646955 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:29:52.646961 | orchestrator | 2026-03-29 00:29:52.646968 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-29 00:29:52.646974 | orchestrator | Sunday 29 March 2026 00:29:34 +0000 (0:00:00.261) 0:00:50.767 ********** 2026-03-29 00:29:52.646981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:29:52.646988 | orchestrator | 2026-03-29 00:29:52.646994 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-29 00:29:52.647000 | orchestrator | Sunday 29 March 2026 00:29:34 +0000 (0:00:00.316) 0:00:51.084 ********** 2026-03-29 00:29:52.647006 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:29:52.647012 | orchestrator | ok: [testbed-manager] 2026-03-29 00:29:52.647019 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:29:52.647025 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:29:52.647086 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:29:52.647109 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:29:52.647116 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:29:52.647121 | 
orchestrator | 2026-03-29 00:29:52.647127 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-29 00:29:52.647133 | orchestrator | Sunday 29 March 2026 00:29:36 +0000 (0:00:02.431) 0:00:53.515 ********** 2026-03-29 00:29:52.647139 | orchestrator | changed: [testbed-manager] 2026-03-29 00:29:52.647146 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:29:52.647152 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:29:52.647158 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:29:52.647164 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:29:52.647170 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:29:52.647176 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:29:52.647189 | orchestrator | 2026-03-29 00:29:52.647195 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-29 00:29:52.647201 | orchestrator | Sunday 29 March 2026 00:29:38 +0000 (0:00:01.208) 0:00:54.723 ********** 2026-03-29 00:29:52.647207 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:29:52.647213 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:29:52.647220 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:29:52.647226 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:29:52.647232 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:29:52.647252 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:29:52.647259 | orchestrator | changed: [testbed-manager] 2026-03-29 00:29:52.647265 | orchestrator | 2026-03-29 00:29:52.647271 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-29 00:29:52.647277 | orchestrator | Sunday 29 March 2026 00:29:49 +0000 (0:00:11.411) 0:01:06.135 ********** 2026-03-29 00:29:52.647298 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:29:52.647305 | orchestrator | ok: [testbed-manager] 2026-03-29 00:29:52.647312 | orchestrator | ok: 
[testbed-node-3] 2026-03-29 00:29:52.647319 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:29:52.647326 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:29:52.647333 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:29:52.647339 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:29:52.647345 | orchestrator | 2026-03-29 00:29:52.647352 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-29 00:29:52.647358 | orchestrator | Sunday 29 March 2026 00:29:50 +0000 (0:00:01.122) 0:01:07.257 ********** 2026-03-29 00:29:52.647367 | orchestrator | ok: [testbed-manager] 2026-03-29 00:29:52.647373 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:29:52.647380 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:29:52.647387 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:29:52.647394 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:29:52.647400 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:29:52.647406 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:29:52.647413 | orchestrator | 2026-03-29 00:29:52.647419 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-29 00:29:52.647426 | orchestrator | Sunday 29 March 2026 00:29:51 +0000 (0:00:01.303) 0:01:08.560 ********** 2026-03-29 00:29:52.647439 | orchestrator | ok: [testbed-manager] 2026-03-29 00:29:52.647446 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:29:52.647453 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:29:52.647459 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:29:52.647466 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:29:52.647472 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:29:52.647479 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:29:52.647485 | orchestrator | 2026-03-29 00:29:52.647492 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-29 00:29:52.647499 | orchestrator | Sunday 29 
March 2026 00:29:52 +0000 (0:00:00.206) 0:01:08.767 ********** 2026-03-29 00:29:52.647506 | orchestrator | ok: [testbed-manager] 2026-03-29 00:29:52.647513 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:29:52.647519 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:29:52.647526 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:29:52.647532 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:29:52.647539 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:29:52.647545 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:29:52.647552 | orchestrator | 2026-03-29 00:29:52.647558 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-29 00:29:52.647565 | orchestrator | Sunday 29 March 2026 00:29:52 +0000 (0:00:00.208) 0:01:08.975 ********** 2026-03-29 00:29:52.647572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:29:52.647579 | orchestrator | 2026-03-29 00:29:52.647591 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-29 00:32:14.976756 | orchestrator | Sunday 29 March 2026 00:29:52 +0000 (0:00:00.266) 0:01:09.242 ********** 2026-03-29 00:32:14.976875 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:14.976895 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:14.976907 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:14.976918 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:14.976929 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:14.976941 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:14.976953 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:14.976965 | orchestrator | 2026-03-29 00:32:14.976978 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-03-29 00:32:14.976992 | orchestrator | Sunday 29 March 2026 00:29:54 +0000 (0:00:02.345) 0:01:11.587 ********** 2026-03-29 00:32:14.977005 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:32:14.977017 | orchestrator | changed: [testbed-manager] 2026-03-29 00:32:14.977025 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:32:14.977031 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:32:14.977038 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:32:14.977045 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:32:14.977052 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:32:14.977059 | orchestrator | 2026-03-29 00:32:14.977066 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-29 00:32:14.977074 | orchestrator | Sunday 29 March 2026 00:29:55 +0000 (0:00:00.627) 0:01:12.214 ********** 2026-03-29 00:32:14.977081 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:14.977088 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:14.977094 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:14.977101 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:14.977108 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:14.977114 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:14.977121 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:14.977127 | orchestrator | 2026-03-29 00:32:14.977135 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-29 00:32:14.977142 | orchestrator | Sunday 29 March 2026 00:29:55 +0000 (0:00:00.224) 0:01:12.439 ********** 2026-03-29 00:32:14.977149 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:14.977155 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:14.977162 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:14.977169 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:14.977175 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:14.977182 | 
orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:14.977188 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:14.977195 | orchestrator | 2026-03-29 00:32:14.977201 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-29 00:32:14.977208 | orchestrator | Sunday 29 March 2026 00:29:57 +0000 (0:00:01.293) 0:01:13.732 ********** 2026-03-29 00:32:14.977215 | orchestrator | changed: [testbed-manager] 2026-03-29 00:32:14.977221 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:32:14.977229 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:32:14.977240 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:32:14.977250 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:32:14.977261 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:32:14.977271 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:32:14.977282 | orchestrator | 2026-03-29 00:32:14.977297 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-29 00:32:14.977309 | orchestrator | Sunday 29 March 2026 00:29:58 +0000 (0:00:01.624) 0:01:15.357 ********** 2026-03-29 00:32:14.977319 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:14.977330 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:14.977341 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:14.977354 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:14.977367 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:14.977378 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:14.977390 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:14.977398 | orchestrator | 2026-03-29 00:32:14.977406 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-29 00:32:14.977436 | orchestrator | Sunday 29 March 2026 00:30:02 +0000 (0:00:03.308) 0:01:18.666 ********** 2026-03-29 00:32:14.977444 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:14.977452 
| orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:14.977460 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:14.977467 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:14.977475 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:14.977483 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:14.977490 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:14.977498 | orchestrator | 2026-03-29 00:32:14.977506 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-29 00:32:14.977514 | orchestrator | Sunday 29 March 2026 00:30:39 +0000 (0:00:37.479) 0:01:56.145 ********** 2026-03-29 00:32:14.977521 | orchestrator | changed: [testbed-manager] 2026-03-29 00:32:14.977528 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:32:14.977534 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:32:14.977541 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:32:14.977548 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:32:14.977554 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:32:14.977561 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:32:14.977568 | orchestrator | 2026-03-29 00:32:14.977628 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-29 00:32:14.977635 | orchestrator | Sunday 29 March 2026 00:31:58 +0000 (0:01:19.218) 0:03:15.363 ********** 2026-03-29 00:32:14.977642 | orchestrator | ok: [testbed-manager] 2026-03-29 00:32:14.977648 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:14.977655 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:14.977661 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:14.977668 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:14.977675 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:14.977681 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:14.977688 | orchestrator | 2026-03-29 00:32:14.977694 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-03-29 00:32:14.977701 | orchestrator | Sunday 29 March 2026 00:32:00 +0000 (0:00:02.047) 0:03:17.411 ********** 2026-03-29 00:32:14.977708 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:32:14.977714 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:32:14.977721 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:32:14.977728 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:32:14.977734 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:32:14.977741 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:32:14.977747 | orchestrator | changed: [testbed-manager] 2026-03-29 00:32:14.977754 | orchestrator | 2026-03-29 00:32:14.977760 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-29 00:32:14.977767 | orchestrator | Sunday 29 March 2026 00:32:12 +0000 (0:00:11.966) 0:03:29.377 ********** 2026-03-29 00:32:14.977805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-29 00:32:14.977826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-29 00:32:14.977845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-29 00:32:14.977853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-29 00:32:14.977860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-29 00:32:14.977867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-29 00:32:14.977874 | orchestrator | 2026-03-29 00:32:14.977881 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-29 00:32:14.977888 | orchestrator | Sunday 29 March 2026 00:32:13 +0000 (0:00:00.392) 0:03:29.769 ********** 2026-03-29 00:32:14.977894 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-29 00:32:14.977901 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-29 00:32:14.977908 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:32:14.977915 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-29 00:32:14.977921 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:32:14.977931 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-29 00:32:14.977938 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:32:14.977945 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:32:14.977952 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 00:32:14.977958 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 00:32:14.977965 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 00:32:14.977972 | orchestrator | 2026-03-29 00:32:14.977978 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-29 00:32:14.977985 | orchestrator | Sunday 29 March 2026 00:32:14 +0000 (0:00:01.740) 0:03:31.509 ********** 2026-03-29 00:32:14.977992 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-29 00:32:14.978000 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-29 00:32:14.978006 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-29 00:32:14.978013 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-29 00:32:14.978072 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-29 00:32:14.978086 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-29 00:32:23.060284 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-29 00:32:23.060357 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-29 00:32:23.060375 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-29 00:32:23.060379 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-29 00:32:23.060383 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-29 00:32:23.060387 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-29 00:32:23.060392 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-29 00:32:23.060395 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-29 00:32:23.060399 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-29 00:32:23.060403 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-29 00:32:23.060407 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-29 00:32:23.060411 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-29 00:32:23.060415 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-29 00:32:23.060419 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-29 00:32:23.060423 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-29 00:32:23.060426 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-29 00:32:23.060430 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-29 00:32:23.060434 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-29 00:32:23.060438 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:32:23.060443 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-29 00:32:23.060447 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-29 00:32:23.060451 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-29 00:32:23.060454 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-29 00:32:23.060458 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-29 00:32:23.060462 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-29 00:32:23.060466 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-29 00:32:23.060469 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-29 00:32:23.060473 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-29 00:32:23.060477 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-29 00:32:23.060480 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'net.core.rmem_max', 'value': 16777216})
2026-03-29 00:32:23.060488 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:32:23.060492 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 00:32:23.060496 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 00:32:23.060499 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 00:32:23.060503 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 00:32:23.060510 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 00:32:23.060514 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:32:23.060517 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:32:23.060521 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 00:32:23.060525 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 00:32:23.060529 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 00:32:23.060532 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 00:32:23.060536 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 00:32:23.060547 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 00:32:23.060552 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 00:32:23.060606 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-29 00:32:23.060611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 00:32:23.060614 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 00:32:23.060618 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-29 00:32:23.060622 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 00:32:23.060626 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-29 00:32:23.060629 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 00:32:23.060633 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-29 00:32:23.060637 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 00:32:23.060641 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 00:32:23.060644 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-29 00:32:23.060648 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 00:32:23.060652 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 00:32:23.060655 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-29 00:32:23.060659 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 00:32:23.060663 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 00:32:23.060666 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-29 00:32:23.060670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 00:32:23.060674 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-29 00:32:23.060678 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 00:32:23.060681 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-29 00:32:23.060685 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 00:32:23.060689 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-29 00:32:23.060696 | orchestrator |
2026-03-29 00:32:23.060700 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-29 00:32:23.060704 | orchestrator | Sunday 29 March 2026 00:32:20 +0000 (0:00:06.058) 0:03:37.568 **********
2026-03-29 00:32:23.060708 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:32:23.060712 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:32:23.060715 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:32:23.060719 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:32:23.060725 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:32:23.060729 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:32:23.060733 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-29 00:32:23.060737 | orchestrator |
2026-03-29 00:32:23.060741 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-29 00:32:23.060744 | orchestrator | Sunday 29 March 2026 00:32:22 +0000 (0:00:01.538) 0:03:39.106 **********
2026-03-29 00:32:23.060748 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:23.060752 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:32:23.060756 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:23.060759 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:23.060763 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:32:23.060767 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:32:23.060770 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:23.060774 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:32:23.060778 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:23.060782 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:23.060788 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:37.952990 | orchestrator |
2026-03-29 00:32:37.953076 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-29 00:32:37.953086 | orchestrator | Sunday 29 March 2026 00:32:23 +0000 (0:00:00.546) 0:03:39.653 **********
2026-03-29 00:32:37.953093 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:37.953100 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:32:37.953108 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:37.953115 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:32:37.953121 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:37.953128 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:37.953134 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:32:37.953140 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:32:37.953146 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:37.953153 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:37.953159 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-29 00:32:37.953165 | orchestrator |
2026-03-29 00:32:37.953171 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-29 00:32:37.953211 | orchestrator | Sunday 29 March 2026 00:32:24 +0000 (0:00:01.581) 0:03:41.235 **********
2026-03-29 00:32:37.953218 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:32:37.953232 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:32:37.953239 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:32:37.953245 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:32:37.953251 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:32:37.953257 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:32:37.953263 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:32:37.953270 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:32:37.953276 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:32:37.953282 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:32:37.953289 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-29 00:32:37.953295 | orchestrator |
2026-03-29 00:32:37.953301 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-29 00:32:37.953307 | orchestrator | Sunday 29 March 2026 00:32:26 +0000 (0:00:01.540) 0:03:42.775 **********
2026-03-29 00:32:37.953314 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:32:37.953320 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:32:37.953326 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:32:37.953332 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:32:37.953338 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:32:37.953344 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:32:37.953350 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:32:37.953357 | orchestrator |
2026-03-29 00:32:37.953363 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-29 00:32:37.953369 | orchestrator | Sunday 29 March 2026 00:32:26 +0000 (0:00:00.290) 0:03:43.065 **********
2026-03-29 00:32:37.953375 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:32:37.953382 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:32:37.953388 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:32:37.953395 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:32:37.953401 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:32:37.953409 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:32:37.953419 | orchestrator | ok: [testbed-manager]
2026-03-29 00:32:37.953429 | orchestrator |
2026-03-29 00:32:37.953438 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-29 00:32:37.953448 | orchestrator | Sunday 29 March 2026 00:32:31 +0000 (0:00:05.317) 0:03:48.383 **********
2026-03-29 00:32:37.953458 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-29 00:32:37.953468 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-29 00:32:37.953478 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:32:37.953488 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:32:37.953498 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-29 00:32:37.953507 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-29 00:32:37.953517 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:32:37.953526 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-29 00:32:37.953583 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:32:37.953591 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-29 00:32:37.953612 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:32:37.953619 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:32:37.953626 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-29 00:32:37.953633 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:32:37.953640 | orchestrator |
2026-03-29 00:32:37.953655 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-29 00:32:37.953662 | orchestrator | Sunday 29 March 2026 00:32:32 +0000 (0:00:00.294) 0:03:48.677 **********
2026-03-29 00:32:37.953669 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-29 00:32:37.953677 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-29 00:32:37.953684 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-29 00:32:37.953705 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-29 00:32:37.953712 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-29 00:32:37.953720 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-29 00:32:37.953727 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-29 00:32:37.953735 | orchestrator |
2026-03-29 00:32:37.953742 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-29 00:32:37.953750 | orchestrator | Sunday 29 March 2026 00:32:33 +0000 (0:00:01.267) 0:03:49.944 **********
2026-03-29 00:32:37.953758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:32:37.953767 | orchestrator |
2026-03-29 00:32:37.953775 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-29 00:32:37.953782 | orchestrator | Sunday 29 March 2026 00:32:33 +0000 (0:00:00.399) 0:03:50.344 **********
2026-03-29 00:32:37.953790 | orchestrator | ok: [testbed-manager]
2026-03-29 00:32:37.953797 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:32:37.953804 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:32:37.953811 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:32:37.953817 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:32:37.953824 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:32:37.953831 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:32:37.953838 | orchestrator |
2026-03-29 00:32:37.953845 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-29 00:32:37.953852 | orchestrator | Sunday 29 March 2026 00:32:35 +0000 (0:00:01.322) 0:03:51.667 **********
2026-03-29 00:32:37.953859 | orchestrator | ok: [testbed-manager]
2026-03-29 00:32:37.953867 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:32:37.953873 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:32:37.953880 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:32:37.953887 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:32:37.953894 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:32:37.953901 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:32:37.953908 | orchestrator |
2026-03-29 00:32:37.953915 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-29 00:32:37.953921 | orchestrator | Sunday 29 March 2026 00:32:35 +0000 (0:00:00.640) 0:03:52.307 **********
2026-03-29 00:32:37.953928 | orchestrator | changed: [testbed-manager]
2026-03-29 00:32:37.953934 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:32:37.953940 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:32:37.953946 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:32:37.953952 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:32:37.953958 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:32:37.953964 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:32:37.953970 | orchestrator |
2026-03-29 00:32:37.953976 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-29 00:32:37.953982 | orchestrator | Sunday 29 March 2026 00:32:36 +0000 (0:00:00.595) 0:03:52.903 **********
2026-03-29 00:32:37.953988 | orchestrator | ok: [testbed-manager]
2026-03-29 00:32:37.953995 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:32:37.954001 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:32:37.954007 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:32:37.954059 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:32:37.954068 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:32:37.954075 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:32:37.954081 | orchestrator |
2026-03-29 00:32:37.954087 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-29 00:32:37.954098 | orchestrator | Sunday 29 March 2026 00:32:36 +0000 (0:00:00.623) 0:03:53.527 **********
2026-03-29 00:32:37.954112 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742749.254067, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:37.954121 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742714.9956264, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:37.954128 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742737.1579607, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:37.954152 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742768.0580418, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:42.700516 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742767.6288629, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:42.700618 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742743.1462038, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:42.700625 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774742755.964104, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:42.700644 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:42.700658 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:42.700662 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:42.700666 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:42.700684 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:42.700689 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:42.700693 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 00:32:42.700700 | orchestrator |
2026-03-29 00:32:42.700705 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-29 00:32:42.700710 | orchestrator | Sunday 29 March 2026 00:32:37 +0000 (0:00:01.020) 0:03:54.547 **********
2026-03-29 00:32:42.700714 | orchestrator | changed: [testbed-manager]
2026-03-29 00:32:42.700719 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:32:42.700723 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:32:42.700726 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:32:42.700730 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:32:42.700734 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:32:42.700738 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:32:42.700742 | orchestrator |
2026-03-29 00:32:42.700746 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-29 00:32:42.700750 | orchestrator | Sunday 29 March 2026 00:32:39 +0000 (0:00:01.145) 0:03:55.692 **********
2026-03-29 00:32:42.700753 | orchestrator | changed: [testbed-manager]
2026-03-29 00:32:42.700757 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:32:42.700761 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:32:42.700765 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:32:42.700768 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:32:42.700772 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:32:42.700776 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:32:42.700780 | orchestrator |
2026-03-29 00:32:42.700786 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-29 00:32:42.700790 | orchestrator | Sunday 29 March 2026 00:32:40 +0000 (0:00:01.144) 0:03:56.837 **********
2026-03-29 00:32:42.700794 | orchestrator | changed: [testbed-manager]
2026-03-29 00:32:42.700797 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:32:42.700801 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:32:42.700805 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:32:42.700808 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:32:42.700812 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:32:42.700816 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:32:42.700820 | orchestrator |
2026-03-29 00:32:42.700823 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-29 00:32:42.700827 | orchestrator | Sunday 29 March 2026 00:32:41 +0000 (0:00:00.247) 0:03:57.937 **********
2026-03-29 00:32:42.700831 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:32:42.700835 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:32:42.700838 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:32:42.700842 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:32:42.700846 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:32:42.700849 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:32:42.700853 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:32:42.700857 | orchestrator |
2026-03-29 00:32:42.700861 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-29 00:32:42.700864 | orchestrator | Sunday 29 March 2026 00:32:41 +0000 (0:00:00.247) 0:03:58.185 **********
2026-03-29 00:32:42.700868 | orchestrator | ok: [testbed-manager]
2026-03-29 00:32:42.700873 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:32:42.700877 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:32:42.700880 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:32:42.700884 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:32:42.700888 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:32:42.700892 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:32:42.700895 | orchestrator |
2026-03-29 00:32:42.700899 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-29 00:32:42.700903 | orchestrator | Sunday 29 March 2026 00:32:42 +0000 (0:00:00.708) 0:03:58.893 **********
2026-03-29 00:32:42.700908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:32:42.700921 | orchestrator |
2026-03-29 00:32:42.700925 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-29 00:32:42.700931 | orchestrator | Sunday 29 March 2026 00:32:42 +0000 (0:00:00.406) 0:03:59.300 **********
2026-03-29 00:34:05.252914 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.253015 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:34:05.253030 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:34:05.253043 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:34:05.253054 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:34:05.253064 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:34:05.253075 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:34:05.253087 | orchestrator |
2026-03-29 00:34:05.253098 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-29 00:34:05.253110 | orchestrator | Sunday 29 March 2026 00:32:52 +0000 (0:00:09.809) 0:04:09.109 **********
2026-03-29 00:34:05.253121 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.253133 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:05.253143 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:05.253154 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:05.253165 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:05.253176 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:05.253187 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:05.253198 | orchestrator |
2026-03-29 00:34:05.253209 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-29 00:34:05.253220 | orchestrator | Sunday 29 March 2026 00:32:54 +0000 (0:00:01.754) 0:04:10.864 **********
2026-03-29 00:34:05.253231 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.253242 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:05.253253 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:05.253263 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:05.253274 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:05.253285 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:05.253296 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:05.253306 | orchestrator |
2026-03-29 00:34:05.253317 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-29 00:34:05.253329 | orchestrator | Sunday 29 March 2026 00:32:55 +0000 (0:00:01.043) 0:04:11.907 **********
2026-03-29 00:34:05.253340 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.253350 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:05.253361 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:05.253372 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:05.253384 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:05.253438 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:05.253449 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:05.253460 | orchestrator |
2026-03-29 00:34:05.253473 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-29 00:34:05.253488 | orchestrator | Sunday 29 March 2026 00:32:55 +0000 (0:00:00.273) 0:04:12.181 **********
2026-03-29 00:34:05.253501 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.253514 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:05.253526 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:05.253538 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:05.253551 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:05.253564 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:05.253576 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:05.253588 | orchestrator |
2026-03-29 00:34:05.253601 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-29 00:34:05.253614 | orchestrator | Sunday 29 March 2026 00:32:55 +0000 (0:00:00.346) 0:04:12.528 **********
2026-03-29 00:34:05.253627 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.253639 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:05.253651 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:05.253685 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:05.253698 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:05.253710 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:05.253722 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:05.253734 | orchestrator |
2026-03-29 00:34:05.253747 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-29 00:34:05.253761 | orchestrator | Sunday 29 March 2026 00:32:56 +0000 (0:00:00.297) 0:04:12.825 **********
2026-03-29 00:34:05.253774 | orchestrator | ok: [testbed-manager]
2026-03-29 00:34:05.253785 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:34:05.253796 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:34:05.253807 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:34:05.253818 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:34:05.253828 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:34:05.253839 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:34:05.253850 | orchestrator |
2026-03-29 00:34:05.253861 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-29 00:34:05.253872 | orchestrator | Sunday 29 March 2026 00:33:01 +0000 (0:00:05.303) 0:04:18.129 **********
2026-03-29 00:34:05.253884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:34:05.253898 | orchestrator |
2026-03-29 00:34:05.253909 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-29 00:34:05.253920 | orchestrator | Sunday 29 March 2026 00:33:01 +0000 (0:00:00.429) 0:04:18.558 **********
2026-03-29 00:34:05.253931 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-29 00:34:05.253942 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-29 00:34:05.253953 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-29 00:34:05.253963 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:05.253975 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-29 00:34:05.254001 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-29 00:34:05.254013 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-29 00:34:05.254080 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:05.254092 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-29 00:34:05.254103 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-29 00:34:05.254113 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:05.254124 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-29 00:34:05.254135 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:05.254146 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-29 00:34:05.254157 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-29 00:34:05.254168 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:05.254196 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-29 00:34:05.254208 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:05.254219 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-29 00:34:05.254229 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-29 00:34:05.254240 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:05.254251 | orchestrator |
2026-03-29 00:34:05.254262 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-29 00:34:05.254273 | orchestrator | Sunday 29 March 2026 00:33:02 +0000 (0:00:00.323) 0:04:18.881 **********
2026-03-29 00:34:05.254284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:34:05.254296 | orchestrator |
2026-03-29 00:34:05.254306 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-29 00:34:05.254325 | orchestrator | Sunday 29 March 2026 00:33:02 +0000 (0:00:00.381) 0:04:19.263 **********
2026-03-29 00:34:05.254337 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-29 00:34:05.254347 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-29 00:34:05.254358 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:34:05.254369 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-29 00:34:05.254380 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:34:05.254433 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-29 00:34:05.254444 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:34:05.254455 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-29 00:34:05.254466 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:34:05.254476 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-29 00:34:05.254487 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:34:05.254498 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:34:05.254509 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-29 00:34:05.254519 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:34:05.254530 | orchestrator |
2026-03-29 00:34:05.254541 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-29 00:34:05.254552 | orchestrator | Sunday 29 March 2026 00:33:02 +0000 (0:00:00.387) 0:04:19.566 **********
2026-03-29 00:34:05.254563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:34:05.254575 | orchestrator |
2026-03-29 00:34:05.254586 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-29 00:34:05.254596 | orchestrator | Sunday 29 March 2026 00:33:03 +0000 (0:00:00.387) 0:04:19.954 **********
2026-03-29 00:34:05.254607 | orchestrator | changed: [testbed-manager]
2026-03-29 00:34:05.254618 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:34:05.254629 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:34:05.254640 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:34:05.254657 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:34:05.254668 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:34:05.254679 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:34:05.254690 | orchestrator | 2026-03-29 00:34:05.254701 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-29 00:34:05.254712 | orchestrator | Sunday 29 March 2026 00:33:38 +0000 (0:00:35.020) 0:04:54.974 ********** 2026-03-29 00:34:05.254722 | orchestrator | changed: [testbed-manager] 2026-03-29 00:34:05.254733 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:34:05.254744 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:34:05.254755 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:34:05.254766 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:34:05.254776 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:34:05.254787 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:34:05.254798 | orchestrator | 2026-03-29 00:34:05.254809 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-29 00:34:05.254820 | orchestrator | Sunday 29 March 2026 00:33:47 +0000 (0:00:08.890) 0:05:03.865 ********** 2026-03-29 00:34:05.254830 | orchestrator | changed: [testbed-manager] 2026-03-29 00:34:05.254841 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:34:05.254852 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:34:05.254863 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:34:05.254873 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:34:05.254884 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:34:05.254901 | orchestrator | changed: [testbed-node-5] 2026-03-29 
00:34:05.254921 | orchestrator | 2026-03-29 00:34:05.254941 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-29 00:34:05.254970 | orchestrator | Sunday 29 March 2026 00:33:56 +0000 (0:00:09.544) 0:05:13.409 ********** 2026-03-29 00:34:05.254989 | orchestrator | ok: [testbed-manager] 2026-03-29 00:34:05.255010 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:34:05.255030 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:34:05.255049 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:34:05.255069 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:34:05.255089 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:34:05.255103 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:34:05.255113 | orchestrator | 2026-03-29 00:34:05.255124 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-29 00:34:05.255135 | orchestrator | Sunday 29 March 2026 00:33:58 +0000 (0:00:02.177) 0:05:15.587 ********** 2026-03-29 00:34:05.255146 | orchestrator | changed: [testbed-manager] 2026-03-29 00:34:05.255157 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:34:05.255167 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:34:05.255178 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:34:05.255189 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:34:05.255199 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:34:05.255210 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:34:05.255221 | orchestrator | 2026-03-29 00:34:05.255240 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-29 00:34:16.675551 | orchestrator | Sunday 29 March 2026 00:34:05 +0000 (0:00:06.260) 0:05:21.847 ********** 2026-03-29 00:34:16.675664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:34:16.675681 | orchestrator | 2026-03-29 00:34:16.675693 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-29 00:34:16.675704 | orchestrator | Sunday 29 March 2026 00:34:05 +0000 (0:00:00.417) 0:05:22.265 ********** 2026-03-29 00:34:16.675716 | orchestrator | changed: [testbed-manager] 2026-03-29 00:34:16.675728 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:34:16.675738 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:34:16.675749 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:34:16.675759 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:34:16.675770 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:34:16.675780 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:34:16.675791 | orchestrator | 2026-03-29 00:34:16.675802 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-29 00:34:16.675813 | orchestrator | Sunday 29 March 2026 00:34:06 +0000 (0:00:00.739) 0:05:23.004 ********** 2026-03-29 00:34:16.675823 | orchestrator | ok: [testbed-manager] 2026-03-29 00:34:16.675835 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:34:16.675846 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:34:16.675856 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:34:16.675867 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:34:16.675877 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:34:16.675888 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:34:16.675898 | orchestrator | 2026-03-29 00:34:16.675909 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-29 00:34:16.675920 | orchestrator | Sunday 29 March 2026 00:34:08 +0000 (0:00:01.983) 0:05:24.988 ********** 2026-03-29 00:34:16.675930 | orchestrator | changed: [testbed-manager] 2026-03-29 00:34:16.675941 | orchestrator | 
changed: [testbed-node-4] 2026-03-29 00:34:16.675952 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:34:16.675962 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:34:16.675973 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:34:16.675984 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:34:16.675997 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:34:16.676009 | orchestrator | 2026-03-29 00:34:16.676022 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-29 00:34:16.676034 | orchestrator | Sunday 29 March 2026 00:34:09 +0000 (0:00:00.723) 0:05:25.711 ********** 2026-03-29 00:34:16.676073 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:34:16.676093 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:34:16.676110 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:34:16.676126 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:34:16.676145 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:34:16.676165 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:34:16.676182 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:34:16.676199 | orchestrator | 2026-03-29 00:34:16.676212 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-29 00:34:16.676224 | orchestrator | Sunday 29 March 2026 00:34:09 +0000 (0:00:00.276) 0:05:25.988 ********** 2026-03-29 00:34:16.676237 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:34:16.676249 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:34:16.676261 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:34:16.676297 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:34:16.676317 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:34:16.676334 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:34:16.676351 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:34:16.676412 | orchestrator | 2026-03-29 
00:34:16.676433 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-29 00:34:16.676453 | orchestrator | Sunday 29 March 2026 00:34:09 +0000 (0:00:00.395) 0:05:26.383 ********** 2026-03-29 00:34:16.676472 | orchestrator | ok: [testbed-manager] 2026-03-29 00:34:16.676489 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:34:16.676508 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:34:16.676520 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:34:16.676531 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:34:16.676541 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:34:16.676552 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:34:16.676562 | orchestrator | 2026-03-29 00:34:16.676573 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-29 00:34:16.676583 | orchestrator | Sunday 29 March 2026 00:34:10 +0000 (0:00:00.330) 0:05:26.713 ********** 2026-03-29 00:34:16.676594 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:34:16.676604 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:34:16.676615 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:34:16.676625 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:34:16.676635 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:34:16.676646 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:34:16.676656 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:34:16.676666 | orchestrator | 2026-03-29 00:34:16.676677 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-29 00:34:16.676689 | orchestrator | Sunday 29 March 2026 00:34:10 +0000 (0:00:00.265) 0:05:26.978 ********** 2026-03-29 00:34:16.676699 | orchestrator | ok: [testbed-manager] 2026-03-29 00:34:16.676709 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:34:16.676720 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:34:16.676730 | orchestrator | 
ok: [testbed-node-5] 2026-03-29 00:34:16.676740 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:34:16.676751 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:34:16.676761 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:34:16.676772 | orchestrator | 2026-03-29 00:34:16.676783 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-29 00:34:16.676793 | orchestrator | Sunday 29 March 2026 00:34:10 +0000 (0:00:00.317) 0:05:27.295 ********** 2026-03-29 00:34:16.676804 | orchestrator | ok: [testbed-manager] =>  2026-03-29 00:34:16.676814 | orchestrator |  docker_version: 5:27.5.1 2026-03-29 00:34:16.676824 | orchestrator | ok: [testbed-node-3] =>  2026-03-29 00:34:16.676835 | orchestrator |  docker_version: 5:27.5.1 2026-03-29 00:34:16.676845 | orchestrator | ok: [testbed-node-4] =>  2026-03-29 00:34:16.676855 | orchestrator |  docker_version: 5:27.5.1 2026-03-29 00:34:16.676866 | orchestrator | ok: [testbed-node-5] =>  2026-03-29 00:34:16.676876 | orchestrator |  docker_version: 5:27.5.1 2026-03-29 00:34:16.676905 | orchestrator | ok: [testbed-node-0] =>  2026-03-29 00:34:16.676928 | orchestrator |  docker_version: 5:27.5.1 2026-03-29 00:34:16.676939 | orchestrator | ok: [testbed-node-1] =>  2026-03-29 00:34:16.676949 | orchestrator |  docker_version: 5:27.5.1 2026-03-29 00:34:16.676959 | orchestrator | ok: [testbed-node-2] =>  2026-03-29 00:34:16.676970 | orchestrator |  docker_version: 5:27.5.1 2026-03-29 00:34:16.676980 | orchestrator | 2026-03-29 00:34:16.676991 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-29 00:34:16.677001 | orchestrator | Sunday 29 March 2026 00:34:10 +0000 (0:00:00.258) 0:05:27.554 ********** 2026-03-29 00:34:16.677012 | orchestrator | ok: [testbed-manager] =>  2026-03-29 00:34:16.677022 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-29 00:34:16.677033 | orchestrator | ok: [testbed-node-3] =>  2026-03-29 
00:34:16.677043 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-29 00:34:16.677053 | orchestrator | ok: [testbed-node-4] =>  2026-03-29 00:34:16.677064 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-29 00:34:16.677074 | orchestrator | ok: [testbed-node-5] =>  2026-03-29 00:34:16.677085 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-29 00:34:16.677095 | orchestrator | ok: [testbed-node-0] =>  2026-03-29 00:34:16.677105 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-29 00:34:16.677115 | orchestrator | ok: [testbed-node-1] =>  2026-03-29 00:34:16.677126 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-29 00:34:16.677136 | orchestrator | ok: [testbed-node-2] =>  2026-03-29 00:34:16.677147 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-29 00:34:16.677158 | orchestrator | 2026-03-29 00:34:16.677168 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-29 00:34:16.677179 | orchestrator | Sunday 29 March 2026 00:34:11 +0000 (0:00:00.295) 0:05:27.850 ********** 2026-03-29 00:34:16.677189 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:34:16.677200 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:34:16.677210 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:34:16.677221 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:34:16.677231 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:34:16.677241 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:34:16.677252 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:34:16.677262 | orchestrator | 2026-03-29 00:34:16.677273 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-29 00:34:16.677283 | orchestrator | Sunday 29 March 2026 00:34:11 +0000 (0:00:00.253) 0:05:28.104 ********** 2026-03-29 00:34:16.677294 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:34:16.677304 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 00:34:16.677314 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:34:16.677325 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:34:16.677335 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:34:16.677346 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:34:16.677356 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:34:16.677392 | orchestrator | 2026-03-29 00:34:16.677411 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-29 00:34:16.677423 | orchestrator | Sunday 29 March 2026 00:34:11 +0000 (0:00:00.267) 0:05:28.371 ********** 2026-03-29 00:34:16.677435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:34:16.677448 | orchestrator | 2026-03-29 00:34:16.677466 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-29 00:34:16.677477 | orchestrator | Sunday 29 March 2026 00:34:12 +0000 (0:00:00.469) 0:05:28.841 ********** 2026-03-29 00:34:16.677488 | orchestrator | ok: [testbed-manager] 2026-03-29 00:34:16.677498 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:34:16.677509 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:34:16.677519 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:34:16.677530 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:34:16.677547 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:34:16.677558 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:34:16.677569 | orchestrator | 2026-03-29 00:34:16.677579 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-29 00:34:16.677590 | orchestrator | Sunday 29 March 2026 00:34:13 +0000 (0:00:00.987) 0:05:29.829 ********** 2026-03-29 00:34:16.677600 | orchestrator 
| ok: [testbed-node-2] 2026-03-29 00:34:16.677611 | orchestrator | ok: [testbed-manager] 2026-03-29 00:34:16.677622 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:34:16.677632 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:34:16.677642 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:34:16.677653 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:34:16.677663 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:34:16.677674 | orchestrator | 2026-03-29 00:34:16.677685 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-29 00:34:16.677696 | orchestrator | Sunday 29 March 2026 00:34:16 +0000 (0:00:03.068) 0:05:32.897 ********** 2026-03-29 00:34:16.677707 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-29 00:34:16.677718 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-29 00:34:16.677728 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-29 00:34:16.677739 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-29 00:34:16.677750 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-29 00:34:16.677760 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:34:16.677771 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-29 00:34:16.677781 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-29 00:34:16.677792 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-29 00:34:16.677803 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-29 00:34:16.677813 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:34:16.677823 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-29 00:34:16.677834 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-29 00:34:16.677844 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  
2026-03-29 00:34:16.677855 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:34:16.677865 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-29 00:34:16.677883 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-29 00:35:19.114894 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-29 00:35:19.114976 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:35:19.114983 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-29 00:35:19.114988 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-29 00:35:19.114993 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-29 00:35:19.114997 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:35:19.115001 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:35:19.115006 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-29 00:35:19.115010 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-29 00:35:19.115015 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-29 00:35:19.115019 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:35:19.115023 | orchestrator | 2026-03-29 00:35:19.115028 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-29 00:35:19.115033 | orchestrator | Sunday 29 March 2026 00:34:16 +0000 (0:00:00.614) 0:05:33.512 ********** 2026-03-29 00:35:19.115038 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:19.115042 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:19.115046 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:19.115050 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:19.115055 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:19.115059 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:19.115063 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:19.115082 | 
orchestrator | 2026-03-29 00:35:19.115087 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-29 00:35:19.115091 | orchestrator | Sunday 29 March 2026 00:34:23 +0000 (0:00:06.780) 0:05:40.292 ********** 2026-03-29 00:35:19.115095 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:19.115099 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:19.115103 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:19.115108 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:19.115112 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:19.115116 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:19.115120 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:19.115124 | orchestrator | 2026-03-29 00:35:19.115128 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-29 00:35:19.115132 | orchestrator | Sunday 29 March 2026 00:34:24 +0000 (0:00:01.031) 0:05:41.324 ********** 2026-03-29 00:35:19.115136 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:19.115140 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:19.115144 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:19.115148 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:19.115153 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:19.115157 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:19.115161 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:19.115165 | orchestrator | 2026-03-29 00:35:19.115169 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-29 00:35:19.115173 | orchestrator | Sunday 29 March 2026 00:34:33 +0000 (0:00:08.888) 0:05:50.212 ********** 2026-03-29 00:35:19.115177 | orchestrator | changed: [testbed-manager] 2026-03-29 00:35:19.115181 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:19.115185 | orchestrator | changed: 
[testbed-node-2] 2026-03-29 00:35:19.115190 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:19.115194 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:19.115198 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:19.115202 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:19.115206 | orchestrator | 2026-03-29 00:35:19.115211 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-29 00:35:19.115215 | orchestrator | Sunday 29 March 2026 00:34:36 +0000 (0:00:03.376) 0:05:53.589 ********** 2026-03-29 00:35:19.115219 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:19.115223 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:19.115228 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:19.115280 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:19.115284 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:19.115289 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:19.115293 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:19.115297 | orchestrator | 2026-03-29 00:35:19.115301 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-29 00:35:19.115305 | orchestrator | Sunday 29 March 2026 00:34:38 +0000 (0:00:01.293) 0:05:54.883 ********** 2026-03-29 00:35:19.115310 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:19.115314 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:19.115318 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:19.115322 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:19.115326 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:19.115330 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:19.115335 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:19.115339 | orchestrator | 2026-03-29 00:35:19.115343 | orchestrator | TASK [osism.services.docker : Unlock containerd package] 
*********************** 2026-03-29 00:35:19.115347 | orchestrator | Sunday 29 March 2026 00:34:39 +0000 (0:00:01.530) 0:05:56.413 ********** 2026-03-29 00:35:19.115352 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:35:19.115356 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:35:19.115360 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:35:19.115364 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:35:19.115372 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:35:19.115376 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:35:19.115380 | orchestrator | changed: [testbed-manager] 2026-03-29 00:35:19.115384 | orchestrator | 2026-03-29 00:35:19.115388 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-29 00:35:19.115392 | orchestrator | Sunday 29 March 2026 00:34:40 +0000 (0:00:00.627) 0:05:57.041 ********** 2026-03-29 00:35:19.115397 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:19.115401 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:19.115405 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:19.115409 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:19.115413 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:19.115417 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:19.115421 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:19.115425 | orchestrator | 2026-03-29 00:35:19.115429 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-29 00:35:19.115443 | orchestrator | Sunday 29 March 2026 00:34:50 +0000 (0:00:10.179) 0:06:07.220 ********** 2026-03-29 00:35:19.115448 | orchestrator | changed: [testbed-manager] 2026-03-29 00:35:19.115452 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:19.115456 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:19.115460 | orchestrator | changed: [testbed-node-5] 2026-03-29 
00:35:19.115464 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:19.115469 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:19.115474 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:19.115478 | orchestrator | 2026-03-29 00:35:19.115483 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-29 00:35:19.115488 | orchestrator | Sunday 29 March 2026 00:34:51 +0000 (0:00:00.946) 0:06:08.167 ********** 2026-03-29 00:35:19.115493 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:19.115498 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:19.115502 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:19.115507 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:19.115511 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:19.115516 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:19.115521 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:19.115525 | orchestrator | 2026-03-29 00:35:19.115530 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-29 00:35:19.115535 | orchestrator | Sunday 29 March 2026 00:35:00 +0000 (0:00:09.222) 0:06:17.389 ********** 2026-03-29 00:35:19.115540 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:19.115545 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:19.115549 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:19.115554 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:19.115558 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:19.115563 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:19.115568 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:19.115572 | orchestrator | 2026-03-29 00:35:19.115576 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-29 00:35:19.115581 | orchestrator | Sunday 29 March 2026 00:35:11 +0000 
(0:00:11.079) 0:06:28.469 ********** 2026-03-29 00:35:19.115585 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-03-29 00:35:19.115589 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-03-29 00:35:19.115593 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-03-29 00:35:19.115598 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-03-29 00:35:19.115602 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-03-29 00:35:19.115606 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-03-29 00:35:19.115610 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-03-29 00:35:19.115614 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-03-29 00:35:19.115618 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-03-29 00:35:19.115622 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-03-29 00:35:19.115629 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-03-29 00:35:19.115663 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-03-29 00:35:19.115667 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-03-29 00:35:19.115671 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-03-29 00:35:19.115675 | orchestrator | 2026-03-29 00:35:19.115680 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-03-29 00:35:19.115684 | orchestrator | Sunday 29 March 2026 00:35:13 +0000 (0:00:01.192) 0:06:29.661 ********** 2026-03-29 00:35:19.115690 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:35:19.115694 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:35:19.115698 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:35:19.115702 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:35:19.115706 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:35:19.115710 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 00:35:19.115714 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:35:19.115719 | orchestrator | 2026-03-29 00:35:19.115723 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-03-29 00:35:19.115727 | orchestrator | Sunday 29 March 2026 00:35:13 +0000 (0:00:00.515) 0:06:30.177 ********** 2026-03-29 00:35:19.115731 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:19.115735 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:19.115739 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:19.115743 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:19.115747 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:19.115752 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:19.115756 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:19.115760 | orchestrator | 2026-03-29 00:35:19.115764 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-03-29 00:35:19.115769 | orchestrator | Sunday 29 March 2026 00:35:18 +0000 (0:00:04.558) 0:06:34.735 ********** 2026-03-29 00:35:19.115773 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:35:19.115777 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:35:19.115781 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:35:19.115786 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:35:19.115790 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:35:19.115794 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:35:19.115798 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:35:19.115802 | orchestrator | 2026-03-29 00:35:19.115806 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-03-29 00:35:19.115811 | orchestrator | Sunday 29 March 2026 00:35:18 +0000 (0:00:00.525) 0:06:35.261 ********** 2026-03-29 
00:35:19.115815 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-03-29 00:35:19.115820 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-29 00:35:19.115824 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:35:19.115828 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-29 00:35:19.115832 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-29 00:35:19.115836 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:35:19.115840 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-29 00:35:19.115844 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-29 00:35:19.115848 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:35:19.115855 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-29 00:35:37.590519 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-29 00:35:37.590656 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:35:37.590671 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-29 00:35:37.590685 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-29 00:35:37.590704 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:35:37.590763 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-29 00:35:37.590783 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-29 00:35:37.590802 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:35:37.590820 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-29 00:35:37.590838 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-29 00:35:37.590857 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:35:37.590875 | orchestrator | 2026-03-29 00:35:37.590896 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install 
python bindings from pip)] *** 2026-03-29 00:35:37.590918 | orchestrator | Sunday 29 March 2026 00:35:19 +0000 (0:00:00.704) 0:06:35.965 ********** 2026-03-29 00:35:37.590936 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:35:37.590954 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:35:37.590965 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:35:37.590976 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:35:37.590987 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:35:37.590997 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:35:37.591008 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:35:37.591018 | orchestrator | 2026-03-29 00:35:37.591030 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-29 00:35:37.591044 | orchestrator | Sunday 29 March 2026 00:35:19 +0000 (0:00:00.496) 0:06:36.462 ********** 2026-03-29 00:35:37.591056 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:35:37.591068 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:35:37.591080 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:35:37.591092 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:35:37.591104 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:35:37.591134 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:35:37.591146 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:35:37.591170 | orchestrator | 2026-03-29 00:35:37.591183 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-03-29 00:35:37.591218 | orchestrator | Sunday 29 March 2026 00:35:20 +0000 (0:00:00.497) 0:06:36.959 ********** 2026-03-29 00:35:37.591231 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:35:37.591243 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:35:37.591255 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:35:37.591267 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 00:35:37.591279 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:35:37.591292 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:35:37.591304 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:35:37.591317 | orchestrator | 2026-03-29 00:35:37.591330 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-29 00:35:37.591342 | orchestrator | Sunday 29 March 2026 00:35:20 +0000 (0:00:00.559) 0:06:37.519 ********** 2026-03-29 00:35:37.591354 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:37.591365 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:35:37.591376 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:35:37.591387 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:35:37.591398 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:35:37.591409 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:35:37.591419 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:35:37.591430 | orchestrator | 2026-03-29 00:35:37.591441 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-29 00:35:37.591452 | orchestrator | Sunday 29 March 2026 00:35:22 +0000 (0:00:01.820) 0:06:39.339 ********** 2026-03-29 00:35:37.591464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:35:37.591478 | orchestrator | 2026-03-29 00:35:37.591489 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-29 00:35:37.591500 | orchestrator | Sunday 29 March 2026 00:35:23 +0000 (0:00:00.838) 0:06:40.177 ********** 2026-03-29 00:35:37.591529 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:37.591541 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:37.591551 | orchestrator | changed: 
[testbed-node-4] 2026-03-29 00:35:37.591562 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:37.591573 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:37.591583 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:37.591594 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:37.591604 | orchestrator | 2026-03-29 00:35:37.591615 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-29 00:35:37.591626 | orchestrator | Sunday 29 March 2026 00:35:24 +0000 (0:00:00.795) 0:06:40.973 ********** 2026-03-29 00:35:37.591636 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:37.591647 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:37.591658 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:37.591668 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:37.591679 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:37.591689 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:37.591700 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:37.591710 | orchestrator | 2026-03-29 00:35:37.591721 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-29 00:35:37.591732 | orchestrator | Sunday 29 March 2026 00:35:25 +0000 (0:00:00.824) 0:06:41.798 ********** 2026-03-29 00:35:37.591742 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:37.591753 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:37.591763 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:37.591774 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:37.591784 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:37.591795 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:37.591805 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:37.591816 | orchestrator | 2026-03-29 00:35:37.591827 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-03-29 00:35:37.591860 | orchestrator | Sunday 29 March 2026 00:35:26 +0000 (0:00:01.528) 0:06:43.327 ********** 2026-03-29 00:35:37.591871 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:35:37.591882 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:35:37.591893 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:35:37.591904 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:35:37.591914 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:35:37.591925 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:35:37.591936 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:35:37.591946 | orchestrator | 2026-03-29 00:35:37.591957 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-29 00:35:37.591968 | orchestrator | Sunday 29 March 2026 00:35:28 +0000 (0:00:01.302) 0:06:44.629 ********** 2026-03-29 00:35:37.591979 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:37.591989 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:37.592000 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:37.592010 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:37.592021 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:35:37.592032 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:37.592042 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:37.592053 | orchestrator | 2026-03-29 00:35:37.592063 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-29 00:35:37.592074 | orchestrator | Sunday 29 March 2026 00:35:29 +0000 (0:00:01.251) 0:06:45.881 ********** 2026-03-29 00:35:37.592085 | orchestrator | changed: [testbed-manager] 2026-03-29 00:35:37.592095 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:35:37.592106 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:35:37.592116 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:35:37.592127 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 00:35:37.592137 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:35:37.592148 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:35:37.592158 | orchestrator | 2026-03-29 00:35:37.592176 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-29 00:35:37.592188 | orchestrator | Sunday 29 March 2026 00:35:30 +0000 (0:00:01.364) 0:06:47.245 ********** 2026-03-29 00:35:37.592231 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:35:37.592242 | orchestrator | 2026-03-29 00:35:37.592253 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-29 00:35:37.592264 | orchestrator | Sunday 29 March 2026 00:35:31 +0000 (0:00:00.984) 0:06:48.230 ********** 2026-03-29 00:35:37.592274 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:37.592285 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:35:37.592296 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:35:37.592306 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:35:37.592317 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:35:37.592327 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:35:37.592338 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:35:37.592349 | orchestrator | 2026-03-29 00:35:37.592360 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-29 00:35:37.592371 | orchestrator | Sunday 29 March 2026 00:35:32 +0000 (0:00:01.290) 0:06:49.520 ********** 2026-03-29 00:35:37.592381 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:37.592392 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:35:37.592403 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:35:37.592413 | orchestrator | ok: [testbed-node-5] 
2026-03-29 00:35:37.592424 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:35:37.592452 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:35:37.592463 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:35:37.592473 | orchestrator | 2026-03-29 00:35:37.592484 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-29 00:35:37.592495 | orchestrator | Sunday 29 March 2026 00:35:33 +0000 (0:00:01.078) 0:06:50.599 ********** 2026-03-29 00:35:37.592506 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:37.592517 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:35:37.592527 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:35:37.592538 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:35:37.592549 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:35:37.592559 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:35:37.592569 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:35:37.592580 | orchestrator | 2026-03-29 00:35:37.592591 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-29 00:35:37.592602 | orchestrator | Sunday 29 March 2026 00:35:35 +0000 (0:00:01.102) 0:06:51.702 ********** 2026-03-29 00:35:37.592612 | orchestrator | ok: [testbed-manager] 2026-03-29 00:35:37.592623 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:35:37.592633 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:35:37.592644 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:35:37.592655 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:35:37.592665 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:35:37.592676 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:35:37.592686 | orchestrator | 2026-03-29 00:35:37.592697 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-29 00:35:37.592708 | orchestrator | Sunday 29 March 2026 00:35:36 +0000 (0:00:01.309) 0:06:53.012 ********** 2026-03-29 00:35:37.592719 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:35:37.592730 | orchestrator | 2026-03-29 00:35:37.592741 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-29 00:35:37.592751 | orchestrator | Sunday 29 March 2026 00:35:37 +0000 (0:00:00.871) 0:06:53.883 ********** 2026-03-29 00:35:37.592762 | orchestrator | 2026-03-29 00:35:37.592773 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-29 00:35:37.592791 | orchestrator | Sunday 29 March 2026 00:35:37 +0000 (0:00:00.039) 0:06:53.923 ********** 2026-03-29 00:35:37.592802 | orchestrator | 2026-03-29 00:35:37.592812 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-29 00:35:37.592823 | orchestrator | Sunday 29 March 2026 00:35:37 +0000 (0:00:00.043) 0:06:53.966 ********** 2026-03-29 00:35:37.592834 | orchestrator | 2026-03-29 00:35:37.592844 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-29 00:35:37.592863 | orchestrator | Sunday 29 March 2026 00:35:37 +0000 (0:00:00.037) 0:06:54.004 ********** 2026-03-29 00:36:02.975928 | orchestrator | 2026-03-29 00:36:02.976073 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-29 00:36:02.976090 | orchestrator | Sunday 29 March 2026 00:35:37 +0000 (0:00:00.037) 0:06:54.041 ********** 2026-03-29 00:36:02.976102 | orchestrator | 2026-03-29 00:36:02.976114 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-29 00:36:02.976125 | orchestrator | Sunday 29 March 2026 00:35:37 +0000 (0:00:00.059) 0:06:54.100 ********** 2026-03-29 00:36:02.976184 | orchestrator | 
2026-03-29 00:36:02.976196 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-29 00:36:02.976208 | orchestrator | Sunday 29 March 2026 00:35:37 +0000 (0:00:00.039) 0:06:54.140 ********** 2026-03-29 00:36:02.976219 | orchestrator | 2026-03-29 00:36:02.976230 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-29 00:36:02.976241 | orchestrator | Sunday 29 March 2026 00:35:37 +0000 (0:00:00.039) 0:06:54.179 ********** 2026-03-29 00:36:02.976252 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:36:02.976265 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:36:02.976275 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:36:02.976286 | orchestrator | 2026-03-29 00:36:02.976297 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-29 00:36:02.976308 | orchestrator | Sunday 29 March 2026 00:35:38 +0000 (0:00:01.117) 0:06:55.296 ********** 2026-03-29 00:36:02.976319 | orchestrator | changed: [testbed-manager] 2026-03-29 00:36:02.976332 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:36:02.976343 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:36:02.976354 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:36:02.976364 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:36:02.976375 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:36:02.976386 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:36:02.976397 | orchestrator | 2026-03-29 00:36:02.976408 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-29 00:36:02.976419 | orchestrator | Sunday 29 March 2026 00:35:40 +0000 (0:00:01.409) 0:06:56.706 ********** 2026-03-29 00:36:02.976429 | orchestrator | changed: [testbed-manager] 2026-03-29 00:36:02.976442 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:36:02.976455 | orchestrator | changed: [testbed-node-4] 
2026-03-29 00:36:02.976468 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:36:02.976480 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:36:02.976493 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:36:02.976506 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:36:02.976518 | orchestrator | 2026-03-29 00:36:02.976531 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-29 00:36:02.976544 | orchestrator | Sunday 29 March 2026 00:35:41 +0000 (0:00:01.199) 0:06:57.906 ********** 2026-03-29 00:36:02.976557 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:36:02.976568 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:36:02.976578 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:36:02.976589 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:36:02.976600 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:36:02.976611 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:36:02.976621 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:36:02.976632 | orchestrator | 2026-03-29 00:36:02.976643 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-29 00:36:02.976654 | orchestrator | Sunday 29 March 2026 00:35:43 +0000 (0:00:02.433) 0:07:00.340 ********** 2026-03-29 00:36:02.976696 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:36:02.976708 | orchestrator | 2026-03-29 00:36:02.976736 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-29 00:36:02.976748 | orchestrator | Sunday 29 March 2026 00:35:43 +0000 (0:00:00.112) 0:07:00.452 ********** 2026-03-29 00:36:02.976759 | orchestrator | ok: [testbed-manager] 2026-03-29 00:36:02.976770 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:36:02.976780 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:36:02.976791 | orchestrator | changed: [testbed-node-4] 2026-03-29 
00:36:02.976802 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:36:02.976812 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:36:02.976823 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:36:02.976834 | orchestrator | 2026-03-29 00:36:02.976845 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-29 00:36:02.976857 | orchestrator | Sunday 29 March 2026 00:35:44 +0000 (0:00:00.954) 0:07:01.407 ********** 2026-03-29 00:36:02.976868 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:36:02.976879 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:36:02.976890 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:36:02.976900 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:36:02.976911 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:36:02.976921 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:36:02.976932 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:36:02.976942 | orchestrator | 2026-03-29 00:36:02.976953 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-29 00:36:02.976964 | orchestrator | Sunday 29 March 2026 00:35:45 +0000 (0:00:00.534) 0:07:01.942 ********** 2026-03-29 00:36:02.976977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:36:02.976991 | orchestrator | 2026-03-29 00:36:02.977001 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-29 00:36:02.977012 | orchestrator | Sunday 29 March 2026 00:35:46 +0000 (0:00:01.032) 0:07:02.975 ********** 2026-03-29 00:36:02.977023 | orchestrator | ok: [testbed-manager] 2026-03-29 00:36:02.977034 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:36:02.977044 | orchestrator 
| ok: [testbed-node-4] 2026-03-29 00:36:02.977055 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:36:02.977066 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:36:02.977076 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:36:02.977088 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:36:02.977099 | orchestrator | 2026-03-29 00:36:02.977110 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-29 00:36:02.977121 | orchestrator | Sunday 29 March 2026 00:35:47 +0000 (0:00:00.808) 0:07:03.784 ********** 2026-03-29 00:36:02.977132 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-29 00:36:02.977185 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-29 00:36:02.977198 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-29 00:36:02.977209 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-29 00:36:02.977220 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-29 00:36:02.977230 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-29 00:36:02.977241 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-29 00:36:02.977252 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-29 00:36:02.977263 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-29 00:36:02.977274 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-29 00:36:02.977284 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-29 00:36:02.977295 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-29 00:36:02.977316 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-29 00:36:02.977327 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-29 00:36:02.977338 | orchestrator | 2026-03-29 00:36:02.977349 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-29 00:36:02.977360 | orchestrator | Sunday 29 March 2026 00:35:49 +0000 (0:00:02.305) 0:07:06.090 ********** 2026-03-29 00:36:02.977370 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:36:02.977381 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:36:02.977392 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:36:02.977403 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:36:02.977413 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:36:02.977424 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:36:02.977435 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:36:02.977446 | orchestrator | 2026-03-29 00:36:02.977457 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-29 00:36:02.977468 | orchestrator | Sunday 29 March 2026 00:35:50 +0000 (0:00:00.672) 0:07:06.762 ********** 2026-03-29 00:36:02.977480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:36:02.977493 | orchestrator | 2026-03-29 00:36:02.977504 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-29 00:36:02.977515 | orchestrator | Sunday 29 March 2026 00:35:50 +0000 (0:00:00.806) 0:07:07.569 ********** 2026-03-29 00:36:02.977526 | orchestrator | ok: [testbed-manager] 2026-03-29 00:36:02.977537 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:36:02.977548 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:36:02.977559 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:36:02.977570 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:36:02.977580 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:36:02.977591 | orchestrator | ok: 
[testbed-node-2] 2026-03-29 00:36:02.977602 | orchestrator | 2026-03-29 00:36:02.977613 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-29 00:36:02.977624 | orchestrator | Sunday 29 March 2026 00:35:51 +0000 (0:00:00.833) 0:07:08.403 ********** 2026-03-29 00:36:02.977640 | orchestrator | ok: [testbed-manager] 2026-03-29 00:36:02.977651 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:36:02.977662 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:36:02.977673 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:36:02.977684 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:36:02.977694 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:36:02.977705 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:36:02.977716 | orchestrator | 2026-03-29 00:36:02.977727 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-29 00:36:02.977738 | orchestrator | Sunday 29 March 2026 00:35:52 +0000 (0:00:00.996) 0:07:09.399 ********** 2026-03-29 00:36:02.977748 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:36:02.977759 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:36:02.977770 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:36:02.977780 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:36:02.977791 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:36:02.977802 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:36:02.977813 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:36:02.977823 | orchestrator | 2026-03-29 00:36:02.977834 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-29 00:36:02.977845 | orchestrator | Sunday 29 March 2026 00:35:53 +0000 (0:00:00.498) 0:07:09.898 ********** 2026-03-29 00:36:02.977856 | orchestrator | ok: [testbed-manager] 2026-03-29 00:36:02.977867 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:36:02.977878 | 
orchestrator | ok: [testbed-node-4]
2026-03-29 00:36:02.977888 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:36:02.977899 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:36:02.977917 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:36:02.977928 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:36:02.977939 | orchestrator |
2026-03-29 00:36:02.977949 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-29 00:36:02.977960 | orchestrator | Sunday 29 March 2026 00:35:54 +0000 (0:00:01.479) 0:07:11.378 **********
2026-03-29 00:36:02.977971 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:36:02.977982 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:36:02.977993 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:36:02.978003 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:36:02.978014 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:36:02.978106 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:36:02.978117 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:36:02.978128 | orchestrator |
2026-03-29 00:36:02.978159 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-29 00:36:02.978171 | orchestrator | Sunday 29 March 2026 00:35:55 +0000 (0:00:00.522) 0:07:11.901 **********
2026-03-29 00:36:02.978194 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:02.978206 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:36:02.978217 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:36:02.978228 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:36:02.978238 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:36:02.978249 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:36:02.978268 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:36:35.006157 | orchestrator |
2026-03-29 00:36:35.006270 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-29 00:36:35.006287 | orchestrator | Sunday 29 March 2026 00:36:02 +0000 (0:00:07.666) 0:07:19.567 **********
2026-03-29 00:36:35.006300 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:35.006312 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:36:35.006324 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:36:35.006335 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:36:35.006345 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:36:35.006356 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:36:35.006367 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:36:35.006378 | orchestrator |
2026-03-29 00:36:35.006389 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-29 00:36:35.006400 | orchestrator | Sunday 29 March 2026 00:36:04 +0000 (0:00:01.554) 0:07:21.122 **********
2026-03-29 00:36:35.006411 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:35.006421 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:36:35.006432 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:36:35.006443 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:36:35.006453 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:36:35.006464 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:36:35.006475 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:36:35.006485 | orchestrator |
2026-03-29 00:36:35.006496 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-29 00:36:35.006507 | orchestrator | Sunday 29 March 2026 00:36:06 +0000 (0:00:01.702) 0:07:22.824 **********
2026-03-29 00:36:35.006518 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:35.006528 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:36:35.006539 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:36:35.006549 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:36:35.006560 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:36:35.006571 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:36:35.006581 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:36:35.006592 | orchestrator |
2026-03-29 00:36:35.006603 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-29 00:36:35.006614 | orchestrator | Sunday 29 March 2026 00:36:07 +0000 (0:00:01.617) 0:07:24.442 **********
2026-03-29 00:36:35.006624 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:35.006635 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:36:35.006646 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:36:35.006684 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:36:35.006695 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:36:35.006706 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:36:35.006717 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:36:35.006727 | orchestrator |
2026-03-29 00:36:35.006738 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-29 00:36:35.006749 | orchestrator | Sunday 29 March 2026 00:36:08 +0000 (0:00:00.837) 0:07:25.280 **********
2026-03-29 00:36:35.006760 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:36:35.006771 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:36:35.006782 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:36:35.006793 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:36:35.006804 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:36:35.006815 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:36:35.006825 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:36:35.006836 | orchestrator |
2026-03-29 00:36:35.006847 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-29 00:36:35.006858 | orchestrator | Sunday 29 March 2026 00:36:09 +0000 (0:00:00.970) 0:07:26.250 **********
2026-03-29 00:36:35.006868 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:36:35.006879 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:36:35.006889 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:36:35.006900 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:36:35.006910 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:36:35.006921 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:36:35.006932 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:36:35.006942 | orchestrator |
2026-03-29 00:36:35.006953 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-29 00:36:35.006964 | orchestrator | Sunday 29 March 2026 00:36:10 +0000 (0:00:00.527) 0:07:26.778 **********
2026-03-29 00:36:35.006975 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:35.007002 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:36:35.007014 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:36:35.007024 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:36:35.007035 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:36:35.007046 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:36:35.007057 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:36:35.007067 | orchestrator |
2026-03-29 00:36:35.007108 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-29 00:36:35.007120 | orchestrator | Sunday 29 March 2026 00:36:10 +0000 (0:00:00.507) 0:07:27.286 **********
2026-03-29 00:36:35.007131 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:35.007142 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:36:35.007152 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:36:35.007163 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:36:35.007174 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:36:35.007185 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:36:35.007196 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:36:35.007206 | orchestrator |
2026-03-29 00:36:35.007217 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-29 00:36:35.007228 | orchestrator | Sunday 29 March 2026 00:36:11 +0000 (0:00:00.487) 0:07:27.773 **********
2026-03-29 00:36:35.007239 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:35.007249 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:36:35.007260 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:36:35.007271 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:36:35.007281 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:36:35.007292 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:36:35.007302 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:36:35.007313 | orchestrator |
2026-03-29 00:36:35.007324 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-29 00:36:35.007335 | orchestrator | Sunday 29 March 2026 00:36:11 +0000 (0:00:00.590) 0:07:28.364 **********
2026-03-29 00:36:35.007346 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:35.007356 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:36:35.007376 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:36:35.007387 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:36:35.007397 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:36:35.007408 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:36:35.007418 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:36:35.007429 | orchestrator |
2026-03-29 00:36:35.007457 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-29 00:36:35.007469 | orchestrator | Sunday 29 March 2026 00:36:17 +0000 (0:00:05.661) 0:07:34.026 **********
2026-03-29 00:36:35.007480 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:36:35.007491 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:36:35.007501 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:36:35.007512 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:36:35.007523 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:36:35.007533 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:36:35.007544 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:36:35.007554 | orchestrator |
2026-03-29 00:36:35.007565 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-29 00:36:35.007576 | orchestrator | Sunday 29 March 2026 00:36:17 +0000 (0:00:00.448) 0:07:34.474 **********
2026-03-29 00:36:35.007589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:36:35.007603 | orchestrator |
2026-03-29 00:36:35.007614 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-29 00:36:35.007625 | orchestrator | Sunday 29 March 2026 00:36:18 +0000 (0:00:00.864) 0:07:35.339 **********
2026-03-29 00:36:35.007636 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:35.007646 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:36:35.007657 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:36:35.007668 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:36:35.007678 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:36:35.007689 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:36:35.007699 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:36:35.007710 | orchestrator |
2026-03-29 00:36:35.007721 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-29 00:36:35.007732 | orchestrator | Sunday 29 March 2026 00:36:20 +0000 (0:00:01.974) 0:07:37.313 **********
2026-03-29 00:36:35.007742 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:35.007753 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:36:35.007764 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:36:35.007774 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:36:35.007785 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:36:35.007796 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:36:35.007806 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:36:35.007817 | orchestrator |
2026-03-29 00:36:35.007828 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-29 00:36:35.007838 | orchestrator | Sunday 29 March 2026 00:36:21 +0000 (0:00:01.026) 0:07:38.340 **********
2026-03-29 00:36:35.007849 | orchestrator | ok: [testbed-manager]
2026-03-29 00:36:35.007860 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:36:35.007870 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:36:35.007881 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:36:35.007891 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:36:35.007902 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:36:35.007913 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:36:35.007923 | orchestrator |
2026-03-29 00:36:35.007934 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-29 00:36:35.007945 | orchestrator | Sunday 29 March 2026 00:36:22 +0000 (0:00:00.717) 0:07:39.058 **********
2026-03-29 00:36:35.007961 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:36:35.007976 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:36:35.007994 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:36:35.008005 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:36:35.008016 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:36:35.008026 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:36:35.008037 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-29 00:36:35.008048 | orchestrator |
2026-03-29 00:36:35.008059 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-29 00:36:35.008070 | orchestrator | Sunday 29 March 2026 00:36:24 +0000 (0:00:01.698) 0:07:40.756 **********
2026-03-29 00:36:35.008098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:36:35.008110 | orchestrator |
2026-03-29 00:36:35.008121 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-29 00:36:35.008131 | orchestrator | Sunday 29 March 2026 00:36:24 +0000 (0:00:00.790) 0:07:41.547 **********
2026-03-29 00:36:35.008142 | orchestrator | changed: [testbed-manager]
2026-03-29 00:36:35.008153 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:36:35.008164 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:36:35.008175 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:36:35.008185 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:36:35.008196 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:36:35.008206 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:36:35.008217 | orchestrator |
2026-03-29 00:36:35.008234 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-29 00:37:04.921928 | orchestrator | Sunday 29 March 2026 00:36:34 +0000 (0:00:10.047) 0:07:51.595 **********
2026-03-29 00:37:04.922012 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:04.922104 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:04.922120 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:04.922127 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:04.922134 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:04.922141 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:04.922148 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:04.922154 | orchestrator |
2026-03-29 00:37:04.922171 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-29 00:37:04.922179 | orchestrator | Sunday 29 March 2026 00:36:36 +0000 (0:00:01.949) 0:07:53.544 **********
2026-03-29 00:37:04.922193 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:04.922200 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:04.922207 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:04.922213 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:04.922220 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:04.922227 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:04.922234 | orchestrator |
2026-03-29 00:37:04.922240 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-29 00:37:04.922247 | orchestrator | Sunday 29 March 2026 00:36:38 +0000 (0:00:01.233) 0:07:54.777 **********
2026-03-29 00:37:04.922254 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:04.922261 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:04.922267 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:04.922274 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:04.922281 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:04.922307 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:04.922314 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:04.922320 | orchestrator |
2026-03-29 00:37:04.922327 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-29 00:37:04.922334 | orchestrator |
2026-03-29 00:37:04.922340 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-29 00:37:04.922347 | orchestrator | Sunday 29 March 2026 00:36:39 +0000 (0:00:01.226) 0:07:56.003 **********
2026-03-29 00:37:04.922354 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:37:04.922360 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:04.922367 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:04.922373 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:04.922380 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:04.922386 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:04.922393 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:04.922399 | orchestrator |
2026-03-29 00:37:04.922406 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-29 00:37:04.922413 | orchestrator |
2026-03-29 00:37:04.922420 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-29 00:37:04.922427 | orchestrator | Sunday 29 March 2026 00:36:40 +0000 (0:00:00.693) 0:07:56.697 **********
2026-03-29 00:37:04.922433 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:04.922440 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:04.922447 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:04.922453 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:04.922460 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:04.922467 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:04.922477 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:04.922489 | orchestrator |
2026-03-29 00:37:04.922501 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-29 00:37:04.922527 | orchestrator | Sunday 29 March 2026 00:36:41 +0000 (0:00:01.299) 0:07:57.997 **********
2026-03-29 00:37:04.922540 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:04.922553 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:04.922564 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:04.922576 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:04.922588 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:04.922600 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:04.922611 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:04.922623 | orchestrator |
2026-03-29 00:37:04.922635 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-29 00:37:04.922648 | orchestrator | Sunday 29 March 2026 00:36:42 +0000 (0:00:01.328) 0:07:59.325 **********
2026-03-29 00:37:04.922661 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:37:04.922673 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:37:04.922685 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:37:04.922696 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:37:04.922707 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:37:04.922718 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:37:04.922729 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:37:04.922740 | orchestrator |
2026-03-29 00:37:04.922752 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-29 00:37:04.922765 | orchestrator | Sunday 29 March 2026 00:36:43 +0000 (0:00:00.484) 0:07:59.809 **********
2026-03-29 00:37:04.922779 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:37:04.922794 | orchestrator |
2026-03-29 00:37:04.922807 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-29 00:37:04.922820 | orchestrator | Sunday 29 March 2026 00:36:44 +0000 (0:00:01.022) 0:08:00.832 **********
2026-03-29 00:37:04.922832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:37:04.922852 | orchestrator |
2026-03-29 00:37:04.922860 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-29 00:37:04.922867 | orchestrator | Sunday 29 March 2026 00:36:45 +0000 (0:00:00.819) 0:08:01.652 **********
2026-03-29 00:37:04.922873 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:04.922880 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:04.922887 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:04.922893 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:04.922900 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:04.922907 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:04.922913 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:04.922920 | orchestrator |
2026-03-29 00:37:04.922941 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-29 00:37:04.922949 | orchestrator | Sunday 29 March 2026 00:36:54 +0000 (0:00:09.470) 0:08:11.123 **********
2026-03-29 00:37:04.922955 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:04.922962 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:04.922968 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:04.922975 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:04.922981 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:04.922988 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:04.922994 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:04.923001 | orchestrator |
2026-03-29 00:37:04.923007 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-29 00:37:04.923014 | orchestrator | Sunday 29 March 2026 00:36:55 +0000 (0:00:00.852) 0:08:11.975 **********
2026-03-29 00:37:04.923037 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:04.923047 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:04.923054 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:04.923061 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:04.923068 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:04.923075 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:04.923082 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:04.923089 | orchestrator |
2026-03-29 00:37:04.923097 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-29 00:37:04.923104 | orchestrator | Sunday 29 March 2026 00:36:56 +0000 (0:00:01.206) 0:08:13.182 **********
2026-03-29 00:37:04.923111 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:04.923118 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:04.923125 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:04.923132 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:04.923139 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:04.923146 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:04.923153 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:04.923160 | orchestrator |
2026-03-29 00:37:04.923168 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-29 00:37:04.923175 | orchestrator | Sunday 29 March 2026 00:36:58 +0000 (0:00:01.685) 0:08:14.868 **********
2026-03-29 00:37:04.923182 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:04.923189 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:04.923196 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:04.923203 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:04.923210 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:04.923217 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:04.923225 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:04.923232 | orchestrator |
2026-03-29 00:37:04.923239 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-29 00:37:04.923246 | orchestrator | Sunday 29 March 2026 00:36:59 +0000 (0:00:01.103) 0:08:15.971 **********
2026-03-29 00:37:04.923253 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:04.923260 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:04.923274 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:04.923281 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:04.923288 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:04.923295 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:04.923302 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:04.923309 | orchestrator |
2026-03-29 00:37:04.923316 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-29 00:37:04.923323 | orchestrator |
2026-03-29 00:37:04.923336 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-29 00:37:04.923344 | orchestrator | Sunday 29 March 2026 00:37:00 +0000 (0:00:00.975) 0:08:16.947 **********
2026-03-29 00:37:04.923351 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:37:04.923359 | orchestrator |
2026-03-29 00:37:04.923366 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-29 00:37:04.923373 | orchestrator | Sunday 29 March 2026 00:37:01 +0000 (0:00:00.691) 0:08:17.639 **********
2026-03-29 00:37:04.923380 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:04.923387 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:04.923394 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:04.923401 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:04.923408 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:04.923415 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:04.923422 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:04.923430 | orchestrator |
2026-03-29 00:37:04.923437 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-29 00:37:04.923444 | orchestrator | Sunday 29 March 2026 00:37:01 +0000 (0:00:00.928) 0:08:18.567 **********
2026-03-29 00:37:04.923451 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:04.923458 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:04.923465 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:04.923472 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:04.923480 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:04.923487 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:04.923494 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:04.923501 | orchestrator |
2026-03-29 00:37:04.923508 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-29 00:37:04.923515 | orchestrator | Sunday 29 March 2026 00:37:03 +0000 (0:00:01.086) 0:08:19.654 **********
2026-03-29 00:37:04.923523 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:37:04.923530 | orchestrator |
2026-03-29 00:37:04.923537 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-29 00:37:04.923544 | orchestrator | Sunday 29 March 2026 00:37:04 +0000 (0:00:00.963) 0:08:20.617 **********
2026-03-29 00:37:04.923551 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:04.923558 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:04.923565 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:04.923572 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:04.923579 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:04.923586 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:04.923593 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:04.923600 | orchestrator |
2026-03-29 00:37:04.923613 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-29 00:37:06.461548 | orchestrator | Sunday 29 March 2026 00:37:04 +0000 (0:00:00.900) 0:08:21.518 **********
2026-03-29 00:37:06.461655 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:06.461671 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:06.461684 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:06.461695 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:06.461705 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:06.461717 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:06.461727 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:06.461765 | orchestrator |
2026-03-29 00:37:06.461777 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:37:06.461790 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-29 00:37:06.461802 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-29 00:37:06.461813 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-29 00:37:06.461824 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-29 00:37:06.461835 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-29 00:37:06.461846 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-29 00:37:06.461857 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-29 00:37:06.461867 | orchestrator |
2026-03-29 00:37:06.461879 | orchestrator |
2026-03-29 00:37:06.461890 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:37:06.461901 | orchestrator | Sunday 29 March 2026 00:37:05 +0000 (0:00:01.061) 0:08:22.579 **********
2026-03-29 00:37:06.461912 | orchestrator | ===============================================================================
2026-03-29 00:37:06.461922 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.22s
2026-03-29 00:37:06.461933 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.48s
2026-03-29 00:37:06.461944 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.02s
2026-03-29 00:37:06.461955 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.50s
2026-03-29 00:37:06.461966 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.97s
2026-03-29 00:37:06.461990 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.41s
2026-03-29 00:37:06.462001 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.08s
2026-03-29 00:37:06.462013 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.18s
2026-03-29 00:37:06.462155 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.05s
2026-03-29 00:37:06.462168 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.81s
2026-03-29 00:37:06.462180 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 9.54s
2026-03-29 00:37:06.462194 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.47s
2026-03-29 00:37:06.462206 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.22s
2026-03-29 00:37:06.462218 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.89s
2026-03-29 00:37:06.462231 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.89s
2026-03-29 00:37:06.462243 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.67s
2026-03-29 00:37:06.462255 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.78s
2026-03-29 00:37:06.462268 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.26s
2026-03-29 00:37:06.462281 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.06s
2026-03-29 00:37:06.462293 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.66s
2026-03-29 00:37:06.767427 | orchestrator | + osism apply fail2ban
2026-03-29 00:37:19.238836 | orchestrator | 2026-03-29 00:37:19 | INFO  | Task eca0fac3-25e9-405d-a40b-4e508e7efdc1 (fail2ban) was prepared for execution.
2026-03-29 00:37:19.238937 | orchestrator | 2026-03-29 00:37:19 | INFO  | It takes a moment until task eca0fac3-25e9-405d-a40b-4e508e7efdc1 (fail2ban) has been started and output is visible here.
2026-03-29 00:37:41.988027 | orchestrator |
2026-03-29 00:37:41.988162 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-29 00:37:41.988181 | orchestrator |
2026-03-29 00:37:41.988237 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-29 00:37:41.988252 | orchestrator | Sunday 29 March 2026 00:37:23 +0000 (0:00:00.229) 0:00:00.229 **********
2026-03-29 00:37:41.988264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:37:41.988278 | orchestrator |
2026-03-29 00:37:41.988290 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-29 00:37:41.988301 | orchestrator | Sunday 29 March 2026 00:37:24 +0000 (0:00:01.003) 0:00:01.232 **********
2026-03-29 00:37:41.988312 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:41.988323 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:41.988334 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:41.988344 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:41.988355 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:41.988365 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:41.988376 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:41.988388 | orchestrator |
2026-03-29 00:37:41.988399 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-29 00:37:41.988410 | orchestrator | Sunday 29 March 2026 00:37:37 +0000 (0:00:12.831) 0:00:14.064 **********
2026-03-29 00:37:41.988421 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:41.988431 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:41.988442 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:41.988453 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:41.988463 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:41.988474 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:41.988484 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:41.988495 | orchestrator |
2026-03-29 00:37:41.988507 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-29 00:37:41.988520 | orchestrator | Sunday 29 March 2026 00:37:38 +0000 (0:00:01.480) 0:00:15.544 **********
2026-03-29 00:37:41.988532 | orchestrator | ok: [testbed-manager]
2026-03-29 00:37:41.988545 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:37:41.988558 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:37:41.988570 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:37:41.988582 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:37:41.988595 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:37:41.988607 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:37:41.988618 | orchestrator |
2026-03-29 00:37:41.988631 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-29 00:37:41.988643 | orchestrator | Sunday 29 March 2026 00:37:39 +0000 (0:00:01.476) 0:00:17.021 **********
2026-03-29 00:37:41.988655 | orchestrator | changed: [testbed-manager]
2026-03-29 00:37:41.988667 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:37:41.988679 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:37:41.988691 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:37:41.988703 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:37:41.988716 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:37:41.988728 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:37:41.988740 | orchestrator |
2026-03-29 00:37:41.988752 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:37:41.988764 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:37:41.988804 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:37:41.988819 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:37:41.988831 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:37:41.988843 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:37:41.988857 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:37:41.988868 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:37:41.988879 | orchestrator |
2026-03-29 00:37:41.988890 | orchestrator |
2026-03-29 00:37:41.988900 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:37:41.988911 | orchestrator | Sunday 29 March 2026 00:37:41 +0000 (0:00:01.604) 0:00:18.626 **********
2026-03-29 00:37:41.988921 | orchestrator | ===============================================================================
2026-03-29 00:37:41.988932 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.83s
2026-03-29 00:37:41.988943 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.60s
2026-03-29 00:37:41.988977 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.48s
2026-03-29 00:37:41.988989 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.48s
2026-03-29 00:37:41.989001 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.00s
2026-03-29 00:37:42.260917 | orchestrator | + osism apply network
2026-03-29 00:37:54.332236 | orchestrator | 2026-03-29 00:37:54 | INFO  | Task c7041da6-eac5-4ed2-95a8-283d887c54d7 (network) was prepared for execution.
2026-03-29 00:37:54.332343 | orchestrator | 2026-03-29 00:37:54 | INFO  | It takes a moment until task c7041da6-eac5-4ed2-95a8-283d887c54d7 (network) has been started and output is visible here.
2026-03-29 00:38:23.194812 | orchestrator |
2026-03-29 00:38:23.194975 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-29 00:38:23.194994 | orchestrator |
2026-03-29 00:38:23.195007 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-29 00:38:23.195018 | orchestrator | Sunday 29 March 2026 00:37:58 +0000 (0:00:00.271) 0:00:00.271 **********
2026-03-29 00:38:23.195029 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:23.195041 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:38:23.195053 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:38:23.195064 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:38:23.195074 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:38:23.195085 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:38:23.195096 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:38:23.195106 | orchestrator |
2026-03-29 00:38:23.195117 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-29 00:38:23.195128 | orchestrator | Sunday 29 March 2026 00:37:59 +0000 (0:00:00.698) 0:00:00.969 **********
2026-03-29 00:38:23.195140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:38:23.195154 | orchestrator |
2026-03-29 00:38:23.195166 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-29 00:38:23.195176 | orchestrator | Sunday 29 March 2026 00:38:00 +0000 (0:00:01.132) 0:00:02.102 **********
2026-03-29 00:38:23.195211 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:23.195222 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:38:23.195233 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:38:23.195244 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:38:23.195254 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:38:23.195265 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:38:23.195275 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:38:23.195286 | orchestrator |
2026-03-29 00:38:23.195297 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-29 00:38:23.195308 | orchestrator | Sunday 29 March 2026 00:38:02 +0000 (0:00:02.458) 0:00:04.561 **********
2026-03-29 00:38:23.195318 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:23.195329 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:38:23.195340 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:38:23.195354 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:38:23.195366 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:38:23.195378 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:38:23.195390 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:38:23.195402 | orchestrator |
2026-03-29 00:38:23.195415 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-29 00:38:23.195428 | orchestrator | Sunday 29 March 2026 00:38:04 +0000 (0:00:01.934) 0:00:06.495 **********
2026-03-29 00:38:23.195441 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-29 00:38:23.195454 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-29 00:38:23.195466 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-29 00:38:23.195478 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-29 00:38:23.195490 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-29 00:38:23.195502 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-29 00:38:23.195515 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-29 00:38:23.195527 | orchestrator |
2026-03-29 00:38:23.195556 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-29 00:38:23.195569 | orchestrator | Sunday 29 March 2026 00:38:05 +0000 (0:00:01.011) 0:00:07.507 **********
2026-03-29 00:38:23.195587 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-29 00:38:23.195600 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-29 00:38:23.195612 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 00:38:23.195625 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 00:38:23.195638 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-29 00:38:23.195650 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-29 00:38:23.195662 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-29 00:38:23.195674 | orchestrator |
2026-03-29 00:38:23.195686 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-29 00:38:23.195699 | orchestrator | Sunday 29 March 2026 00:38:08 +0000 (0:00:03.200) 0:00:10.708 **********
2026-03-29 00:38:23.195710 | orchestrator | changed: [testbed-manager]
2026-03-29 00:38:23.195721 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:38:23.195731 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:38:23.195742 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:38:23.195753 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:38:23.195763 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:38:23.195774 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:38:23.195784 | orchestrator |
2026-03-29 00:38:23.195795 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-29 00:38:23.195806 | orchestrator | Sunday 29 March 2026 00:38:10 +0000 (0:00:01.511) 0:00:12.219 **********
2026-03-29 00:38:23.195817 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 00:38:23.195827 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 00:38:23.195838 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-29 00:38:23.195849 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-29 00:38:23.195889 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-29 00:38:23.195921 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-29 00:38:23.195941 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-29 00:38:23.195960 | orchestrator |
2026-03-29 00:38:23.195979 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-29 00:38:23.195997 | orchestrator | Sunday 29 March 2026 00:38:11 +0000 (0:00:01.495) 0:00:13.714 **********
2026-03-29 00:38:23.196008 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:23.196019 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:38:23.196029 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:38:23.196040 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:38:23.196051 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:38:23.196061 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:38:23.196072 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:38:23.196083 | orchestrator |
2026-03-29 00:38:23.196094 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-29 00:38:23.196123 | orchestrator | Sunday 29 March 2026 00:38:12 +0000 (0:00:01.129) 0:00:14.844 **********
2026-03-29 00:38:23.196135 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:38:23.196146 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:38:23.196156 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:38:23.196167 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:38:23.196177 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:38:23.196188 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:38:23.196198 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:38:23.196209 | orchestrator |
2026-03-29 00:38:23.196220 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-29 00:38:23.196230 | orchestrator | Sunday 29 March 2026 00:38:13 +0000 (0:00:00.643) 0:00:15.487 **********
2026-03-29 00:38:23.196241 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:23.196252 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:38:23.196262 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:38:23.196273 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:38:23.196284 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:38:23.196294 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:38:23.196304 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:38:23.196315 | orchestrator |
2026-03-29 00:38:23.196326 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-29 00:38:23.196336 | orchestrator | Sunday 29 March 2026 00:38:16 +0000 (0:00:02.687) 0:00:18.175 **********
2026-03-29 00:38:23.196347 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:38:23.196358 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:38:23.196368 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:38:23.196379 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:38:23.196389 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:38:23.196400 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:38:23.196411 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-29 00:38:23.196424 | orchestrator |
2026-03-29 00:38:23.196435 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-29 00:38:23.196446 | orchestrator | Sunday 29 March 2026 00:38:17 +0000 (0:00:00.873) 0:00:19.048 **********
2026-03-29 00:38:23.196456 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:23.196467 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:38:23.196477 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:38:23.196488 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:38:23.196499 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:38:23.196509 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:38:23.196520 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:38:23.196530 | orchestrator |
2026-03-29 00:38:23.196541 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-29 00:38:23.196552 | orchestrator | Sunday 29 March 2026 00:38:18 +0000 (0:00:01.817) 0:00:20.866 **********
2026-03-29 00:38:23.196563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:38:23.196583 | orchestrator |
2026-03-29 00:38:23.196594 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-29 00:38:23.196605 | orchestrator | Sunday 29 March 2026 00:38:20 +0000 (0:00:01.284) 0:00:22.151 **********
2026-03-29 00:38:23.196615 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:23.196626 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:38:23.196637 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:38:23.196647 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:38:23.196658 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:38:23.196674 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:38:23.196686 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:38:23.196696 | orchestrator |
2026-03-29 00:38:23.196707 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-29 00:38:23.196718 | orchestrator | Sunday 29 March 2026 00:38:21 +0000 (0:00:01.162) 0:00:23.314 **********
2026-03-29 00:38:23.196729 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:23.196739 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:38:23.196750 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:38:23.196760 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:38:23.196771 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:38:23.196782 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:38:23.196792 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:38:23.196803 | orchestrator |
2026-03-29 00:38:23.196814 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-29 00:38:23.196824 | orchestrator | Sunday 29 March 2026 00:38:22 +0000 (0:00:00.628) 0:00:23.942 **********
2026-03-29 00:38:23.196835 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:38:23.196846 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:38:23.196857 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:38:23.196896 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:38:23.196908 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:38:23.196919 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:38:23.196930 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:38:23.196940 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:38:23.196951 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:38:23.196962 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:38:23.196973 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:38:23.196984 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-29 00:38:23.196994 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:38:23.197006 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-29 00:38:23.197017 | orchestrator |
2026-03-29 00:38:23.197034 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-29 00:38:37.680520 | orchestrator | Sunday 29 March 2026 00:38:23 +0000 (0:00:01.168) 0:00:25.110 **********
2026-03-29 00:38:37.680661 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:38:37.680691 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:38:37.680710 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:38:37.680730 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:38:37.680747 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:38:37.680766 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:38:37.680783 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:38:37.680802 | orchestrator |
2026-03-29 00:38:37.680824 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-29 00:38:37.680928 | orchestrator | Sunday 29 March 2026 00:38:23 +0000 (0:00:00.567) 0:00:25.678 **********
2026-03-29 00:38:37.680954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4
2026-03-29 00:38:37.680977 | orchestrator |
2026-03-29 00:38:37.680995 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-29 00:38:37.681014 | orchestrator | Sunday 29 March 2026 00:38:27 +0000 (0:00:03.978) 0:00:29.656 **********
2026-03-29 00:38:37.681034 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681076 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681190 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681228 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681393 | orchestrator |
2026-03-29 00:38:37.681501 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-29 00:38:37.681525 | orchestrator | Sunday 29 March 2026 00:38:32 +0000 (0:00:04.986) 0:00:34.643 **********
2026-03-29 00:38:37.681545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681561 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681598 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-29 00:38:37.681718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:37.681814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:42.628327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-29 00:38:42.628457 | orchestrator |
2026-03-29 00:38:42.628486 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-29 00:38:42.628509 | orchestrator | Sunday 29 March 2026 00:38:37 +0000 (0:00:04.949) 0:00:39.592 **********
2026-03-29 00:38:42.628523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:38:42.628535 | orchestrator |
2026-03-29 00:38:42.628546 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-29 00:38:42.628557 | orchestrator | Sunday 29 March 2026 00:38:38 +0000 (0:00:01.020) 0:00:40.612 **********
2026-03-29 00:38:42.628567 | orchestrator | ok: [testbed-manager]
2026-03-29 00:38:42.628579 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:38:42.628590 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:38:42.628600 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:38:42.628610 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:38:42.628621 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:38:42.628631 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:38:42.628642 | orchestrator |
2026-03-29 00:38:42.628653 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-29 00:38:42.628663 | orchestrator | Sunday 29 March 2026 00:38:39 +0000 (0:00:00.919) 0:00:41.531 **********
2026-03-29 00:38:42.628674 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:38:42.628686 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:38:42.628696 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:38:42.628707 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:38:42.628718 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:38:42.628728 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:38:42.628739 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:38:42.628749 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:38:42.628760 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:38:42.628771 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:38:42.628782 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:38:42.628812 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:38:42.628823 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:38:42.628909 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:38:42.628922 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:38:42.628959 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:38:42.628972 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:38:42.628984 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:38:42.628997 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:38:42.629009 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:38:42.629023 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:38:42.629035 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:38:42.629047 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:38:42.629060 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:38:42.629073 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:38:42.629086 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:38:42.629098 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:38:42.629111 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:38:42.629123 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:38:42.629135 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:38:42.629147 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-29 00:38:42.629159 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-29 00:38:42.629170 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-29 00:38:42.629182 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-29 00:38:42.629194 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:38:42.629207 | orchestrator |
2026-03-29 00:38:42.629220 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-29 00:38:42.629251 | orchestrator | Sunday 29 March 2026 00:38:41 +0000 (0:00:01.708) 0:00:43.240 **********
2026-03-29 00:38:42.629265 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:38:42.629285 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:38:42.629304 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:38:42.629323 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:38:42.629343 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:38:42.629359 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:38:42.629376 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:38:42.629396 | orchestrator |
2026-03-29 00:38:42.629415 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-29 00:38:42.629436 | orchestrator | Sunday 29 March 2026 00:38:41 +0000 (0:00:00.542) 0:00:43.783 **********
2026-03-29 00:38:42.629457 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:38:42.629476 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:38:42.629495 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:38:42.629516 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:38:42.629536 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:38:42.629553 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:38:42.629564 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:38:42.629574 | orchestrator |
2026-03-29 00:38:42.629585 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:38:42.629597 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 00:38:42.629610 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 00:38:42.629631 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 00:38:42.629642 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 00:38:42.629653 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 00:38:42.629664 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 00:38:42.629674 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 00:38:42.629685 | orchestrator |
2026-03-29 00:38:42.629696 | orchestrator |
2026-03-29 00:38:42.629706 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:38:42.629717 | orchestrator | Sunday 29 March 2026 00:38:42 +0000 (0:00:00.548) 0:00:44.331 **********
2026-03-29 00:38:42.629728 | orchestrator | ===============================================================================
2026-03-29 00:38:42.629746 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.99s
2026-03-29 00:38:42.629757 | orchestrator | osism.commons.network : Create systemd networkd network files
----------- 4.95s 2026-03-29 00:38:42.629767 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.98s 2026-03-29 00:38:42.629778 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.20s 2026-03-29 00:38:42.629788 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.69s 2026-03-29 00:38:42.629805 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.46s 2026-03-29 00:38:42.629823 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.93s 2026-03-29 00:38:42.629902 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.82s 2026-03-29 00:38:42.629919 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.71s 2026-03-29 00:38:42.629935 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.51s 2026-03-29 00:38:42.629953 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.50s 2026-03-29 00:38:42.629972 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.29s 2026-03-29 00:38:42.629990 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.17s 2026-03-29 00:38:42.630008 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2026-03-29 00:38:42.630103 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.13s 2026-03-29 00:38:42.630121 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s 2026-03-29 00:38:42.630131 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.02s 2026-03-29 00:38:42.630142 | orchestrator | osism.commons.network : Create required directories --------------------- 
1.01s 2026-03-29 00:38:42.630153 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.92s 2026-03-29 00:38:42.630163 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.87s 2026-03-29 00:38:42.835516 | orchestrator | + osism apply wireguard 2026-03-29 00:38:54.804652 | orchestrator | 2026-03-29 00:38:54 | INFO  | Task 31838de8-2c0f-4223-86a0-89c9432d1896 (wireguard) was prepared for execution. 2026-03-29 00:38:54.804882 | orchestrator | 2026-03-29 00:38:54 | INFO  | It takes a moment until task 31838de8-2c0f-4223-86a0-89c9432d1896 (wireguard) has been started and output is visible here. 2026-03-29 00:39:13.488478 | orchestrator | 2026-03-29 00:39:13.488623 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-29 00:39:13.488696 | orchestrator | 2026-03-29 00:39:13.488718 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-29 00:39:13.488738 | orchestrator | Sunday 29 March 2026 00:38:58 +0000 (0:00:00.198) 0:00:00.198 ********** 2026-03-29 00:39:13.488757 | orchestrator | ok: [testbed-manager] 2026-03-29 00:39:13.488777 | orchestrator | 2026-03-29 00:39:13.488795 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-29 00:39:13.488813 | orchestrator | Sunday 29 March 2026 00:38:59 +0000 (0:00:01.374) 0:00:01.572 ********** 2026-03-29 00:39:13.488831 | orchestrator | changed: [testbed-manager] 2026-03-29 00:39:13.488850 | orchestrator | 2026-03-29 00:39:13.488961 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-29 00:39:13.488990 | orchestrator | Sunday 29 March 2026 00:39:06 +0000 (0:00:06.150) 0:00:07.723 ********** 2026-03-29 00:39:13.489009 | orchestrator | changed: [testbed-manager] 2026-03-29 00:39:13.489029 | orchestrator | 2026-03-29 00:39:13.489048 | orchestrator | 
TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-29 00:39:13.489067 | orchestrator | Sunday 29 March 2026 00:39:06 +0000 (0:00:00.551) 0:00:08.274 ********** 2026-03-29 00:39:13.489085 | orchestrator | changed: [testbed-manager] 2026-03-29 00:39:13.489103 | orchestrator | 2026-03-29 00:39:13.489119 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-29 00:39:13.489136 | orchestrator | Sunday 29 March 2026 00:39:07 +0000 (0:00:00.435) 0:00:08.710 ********** 2026-03-29 00:39:13.489153 | orchestrator | ok: [testbed-manager] 2026-03-29 00:39:13.489170 | orchestrator | 2026-03-29 00:39:13.489188 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-29 00:39:13.489207 | orchestrator | Sunday 29 March 2026 00:39:07 +0000 (0:00:00.650) 0:00:09.360 ********** 2026-03-29 00:39:13.489224 | orchestrator | ok: [testbed-manager] 2026-03-29 00:39:13.489243 | orchestrator | 2026-03-29 00:39:13.489280 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-29 00:39:13.489301 | orchestrator | Sunday 29 March 2026 00:39:08 +0000 (0:00:00.410) 0:00:09.770 ********** 2026-03-29 00:39:13.489335 | orchestrator | ok: [testbed-manager] 2026-03-29 00:39:13.489353 | orchestrator | 2026-03-29 00:39:13.489369 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-29 00:39:13.489386 | orchestrator | Sunday 29 March 2026 00:39:08 +0000 (0:00:00.406) 0:00:10.176 ********** 2026-03-29 00:39:13.489400 | orchestrator | changed: [testbed-manager] 2026-03-29 00:39:13.489419 | orchestrator | 2026-03-29 00:39:13.489436 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-03-29 00:39:13.489453 | orchestrator | Sunday 29 March 2026 00:39:09 +0000 (0:00:01.152) 0:00:11.328 ********** 2026-03-29 00:39:13.489472 | 
orchestrator | changed: [testbed-manager] => (item=None) 2026-03-29 00:39:13.489492 | orchestrator | changed: [testbed-manager] 2026-03-29 00:39:13.489510 | orchestrator | 2026-03-29 00:39:13.489528 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-29 00:39:13.489547 | orchestrator | Sunday 29 March 2026 00:39:10 +0000 (0:00:00.913) 0:00:12.241 ********** 2026-03-29 00:39:13.489566 | orchestrator | changed: [testbed-manager] 2026-03-29 00:39:13.489585 | orchestrator | 2026-03-29 00:39:13.489604 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-29 00:39:13.489623 | orchestrator | Sunday 29 March 2026 00:39:12 +0000 (0:00:01.613) 0:00:13.855 ********** 2026-03-29 00:39:13.489642 | orchestrator | changed: [testbed-manager] 2026-03-29 00:39:13.489660 | orchestrator | 2026-03-29 00:39:13.489679 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:39:13.489699 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:39:13.489717 | orchestrator | 2026-03-29 00:39:13.489737 | orchestrator | 2026-03-29 00:39:13.489755 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:39:13.489773 | orchestrator | Sunday 29 March 2026 00:39:13 +0000 (0:00:00.910) 0:00:14.766 ********** 2026-03-29 00:39:13.489814 | orchestrator | =============================================================================== 2026-03-29 00:39:13.489825 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.15s 2026-03-29 00:39:13.489835 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.61s 2026-03-29 00:39:13.489844 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.37s 2026-03-29 00:39:13.489854 | 
orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.15s 2026-03-29 00:39:13.489863 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s 2026-03-29 00:39:13.489873 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.91s 2026-03-29 00:39:13.489913 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.65s 2026-03-29 00:39:13.489923 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2026-03-29 00:39:13.489933 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2026-03-29 00:39:13.489942 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.41s 2026-03-29 00:39:13.489952 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2026-03-29 00:39:13.768627 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-29 00:39:13.803857 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-29 00:39:13.804025 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-29 00:39:13.887193 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 180 0 --:--:-- --:--:-- --:--:-- 182 2026-03-29 00:39:13.901941 | orchestrator | + osism apply --environment custom workarounds 2026-03-29 00:39:15.765582 | orchestrator | 2026-03-29 00:39:15 | INFO  | Trying to run play workarounds in environment custom 2026-03-29 00:39:25.904552 | orchestrator | 2026-03-29 00:39:25 | INFO  | Task f0686194-a6c6-45a4-aad5-ee6f225b65c4 (workarounds) was prepared for execution. 2026-03-29 00:39:25.904646 | orchestrator | 2026-03-29 00:39:25 | INFO  | It takes a moment until task f0686194-a6c6-45a4-aad5-ee6f225b65c4 (workarounds) has been started and output is visible here. 
2026-03-29 00:39:50.350312 | orchestrator | 2026-03-29 00:39:50.350432 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:39:50.350443 | orchestrator | 2026-03-29 00:39:50.350450 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-29 00:39:50.350458 | orchestrator | Sunday 29 March 2026 00:39:29 +0000 (0:00:00.112) 0:00:00.112 ********** 2026-03-29 00:39:50.350466 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-29 00:39:50.350474 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-29 00:39:50.350481 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-29 00:39:50.350488 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-29 00:39:50.350495 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-29 00:39:50.350501 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-29 00:39:50.350508 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-29 00:39:50.350514 | orchestrator | 2026-03-29 00:39:50.350521 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-29 00:39:50.350528 | orchestrator | 2026-03-29 00:39:50.350534 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-29 00:39:50.350541 | orchestrator | Sunday 29 March 2026 00:39:30 +0000 (0:00:00.683) 0:00:00.795 ********** 2026-03-29 00:39:50.350548 | orchestrator | ok: [testbed-manager] 2026-03-29 00:39:50.350557 | orchestrator | 2026-03-29 00:39:50.350588 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-29 00:39:50.350595 | orchestrator | 2026-03-29 00:39:50.350602 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-03-29 00:39:50.350609 | orchestrator | Sunday 29 March 2026 00:39:32 +0000 (0:00:02.131) 0:00:02.927 ********** 2026-03-29 00:39:50.350616 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:39:50.350623 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:39:50.350629 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:39:50.350636 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:39:50.350642 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:39:50.350649 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:39:50.350656 | orchestrator | 2026-03-29 00:39:50.350663 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-29 00:39:50.350669 | orchestrator | 2026-03-29 00:39:50.350676 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-29 00:39:50.350698 | orchestrator | Sunday 29 March 2026 00:39:34 +0000 (0:00:01.951) 0:00:04.879 ********** 2026-03-29 00:39:50.350707 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-29 00:39:50.350716 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-29 00:39:50.350723 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-29 00:39:50.350729 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-29 00:39:50.350736 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-29 00:39:50.350743 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-29 00:39:50.350749 | orchestrator | 2026-03-29 00:39:50.350756 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-03-29 00:39:50.350763 | orchestrator | Sunday 29 March 2026 00:39:35 +0000 (0:00:01.477) 0:00:06.357 ********** 2026-03-29 00:39:50.350769 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:39:50.350776 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:39:50.350783 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:39:50.350790 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:39:50.350796 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:39:50.350803 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:39:50.350809 | orchestrator | 2026-03-29 00:39:50.350816 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-29 00:39:50.350823 | orchestrator | Sunday 29 March 2026 00:39:39 +0000 (0:00:03.825) 0:00:10.183 ********** 2026-03-29 00:39:50.350829 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:39:50.350836 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:39:50.350844 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:39:50.350850 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:39:50.350857 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:39:50.350863 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:39:50.350870 | orchestrator | 2026-03-29 00:39:50.350877 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-29 00:39:50.350883 | orchestrator | 2026-03-29 00:39:50.350890 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-29 00:39:50.350897 | orchestrator | Sunday 29 March 2026 00:39:40 +0000 (0:00:00.645) 0:00:10.829 ********** 2026-03-29 00:39:50.350903 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:39:50.350910 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:39:50.350916 | orchestrator | changed: [testbed-node-2] 2026-03-29 
00:39:50.350923 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:39:50.350929 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:39:50.350936 | orchestrator | changed: [testbed-manager] 2026-03-29 00:39:50.350943 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:39:50.350955 | orchestrator | 2026-03-29 00:39:50.350962 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-29 00:39:50.350969 | orchestrator | Sunday 29 March 2026 00:39:41 +0000 (0:00:01.512) 0:00:12.341 ********** 2026-03-29 00:39:50.350976 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:39:50.351003 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:39:50.351010 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:39:50.351017 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:39:50.351023 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:39:50.351030 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:39:50.351051 | orchestrator | changed: [testbed-manager] 2026-03-29 00:39:50.351058 | orchestrator | 2026-03-29 00:39:50.351065 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-29 00:39:50.351071 | orchestrator | Sunday 29 March 2026 00:39:43 +0000 (0:00:01.535) 0:00:13.877 ********** 2026-03-29 00:39:50.351078 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:39:50.351085 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:39:50.351091 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:39:50.351098 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:39:50.351105 | orchestrator | ok: [testbed-manager] 2026-03-29 00:39:50.351111 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:39:50.351118 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:39:50.351124 | orchestrator | 2026-03-29 00:39:50.351131 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-29 00:39:50.351138 | orchestrator 
| Sunday 29 March 2026 00:39:45 +0000 (0:00:01.566) 0:00:15.444 ********** 2026-03-29 00:39:50.351145 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:39:50.351151 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:39:50.351158 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:39:50.351164 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:39:50.351171 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:39:50.351177 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:39:50.351184 | orchestrator | changed: [testbed-manager] 2026-03-29 00:39:50.351190 | orchestrator | 2026-03-29 00:39:50.351197 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-29 00:39:50.351204 | orchestrator | Sunday 29 March 2026 00:39:46 +0000 (0:00:01.830) 0:00:17.274 ********** 2026-03-29 00:39:50.351210 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:39:50.351217 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:39:50.351224 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:39:50.351230 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:39:50.351237 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:39:50.351243 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:39:50.351250 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:39:50.351257 | orchestrator | 2026-03-29 00:39:50.351263 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-29 00:39:50.351270 | orchestrator | 2026-03-29 00:39:50.351277 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-29 00:39:50.351283 | orchestrator | Sunday 29 March 2026 00:39:47 +0000 (0:00:00.625) 0:00:17.899 ********** 2026-03-29 00:39:50.351290 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:39:50.351297 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:39:50.351303 | orchestrator | ok: [testbed-node-1] 
2026-03-29 00:39:50.351310 | orchestrator | ok: [testbed-manager] 2026-03-29 00:39:50.351316 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:39:50.351323 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:39:50.351334 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:39:50.351341 | orchestrator | 2026-03-29 00:39:50.351348 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:39:50.351355 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:39:50.351363 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:39:50.351374 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:39:50.351381 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:39:50.351388 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:39:50.351395 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:39:50.351401 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:39:50.351408 | orchestrator | 2026-03-29 00:39:50.351415 | orchestrator | 2026-03-29 00:39:50.351421 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:39:50.351428 | orchestrator | Sunday 29 March 2026 00:39:50 +0000 (0:00:02.795) 0:00:20.695 ********** 2026-03-29 00:39:50.351435 | orchestrator | =============================================================================== 2026-03-29 00:39:50.351441 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.83s 2026-03-29 00:39:50.351448 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.80s 2026-03-29 00:39:50.351455 | orchestrator | Apply netplan configuration --------------------------------------------- 2.13s 2026-03-29 00:39:50.351461 | orchestrator | Apply netplan configuration --------------------------------------------- 1.95s 2026-03-29 00:39:50.351468 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.83s 2026-03-29 00:39:50.351475 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.57s 2026-03-29 00:39:50.351481 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.54s 2026-03-29 00:39:50.351488 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.51s 2026-03-29 00:39:50.351495 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.48s 2026-03-29 00:39:50.351501 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.68s 2026-03-29 00:39:50.351508 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s 2026-03-29 00:39:50.351518 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s 2026-03-29 00:39:50.926499 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-29 00:40:02.987736 | orchestrator | 2026-03-29 00:40:02 | INFO  | Task 47bff164-f94c-47a6-873f-aafe3eccdc02 (reboot) was prepared for execution. 2026-03-29 00:40:02.987874 | orchestrator | 2026-03-29 00:40:02 | INFO  | It takes a moment until task 47bff164-f94c-47a6-873f-aafe3eccdc02 (reboot) has been started and output is visible here. 
2026-03-29 00:40:12.434181 | orchestrator | 2026-03-29 00:40:12.434286 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-29 00:40:12.434302 | orchestrator | 2026-03-29 00:40:12.434314 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-29 00:40:12.434326 | orchestrator | Sunday 29 March 2026 00:40:07 +0000 (0:00:00.204) 0:00:00.204 ********** 2026-03-29 00:40:12.434338 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:40:12.434350 | orchestrator | 2026-03-29 00:40:12.434361 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-29 00:40:12.434372 | orchestrator | Sunday 29 March 2026 00:40:07 +0000 (0:00:00.105) 0:00:00.310 ********** 2026-03-29 00:40:12.434383 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:40:12.434394 | orchestrator | 2026-03-29 00:40:12.434405 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-29 00:40:12.434443 | orchestrator | Sunday 29 March 2026 00:40:08 +0000 (0:00:00.786) 0:00:01.096 ********** 2026-03-29 00:40:12.434455 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:40:12.434466 | orchestrator | 2026-03-29 00:40:12.434477 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-29 00:40:12.434488 | orchestrator | 2026-03-29 00:40:12.434498 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-29 00:40:12.434509 | orchestrator | Sunday 29 March 2026 00:40:08 +0000 (0:00:00.100) 0:00:01.196 ********** 2026-03-29 00:40:12.434520 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:40:12.434530 | orchestrator | 2026-03-29 00:40:12.434541 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-29 00:40:12.434552 | orchestrator | Sunday 29 March 2026 
00:40:08 +0000 (0:00:00.099) 0:00:01.296 ********** 2026-03-29 00:40:12.434563 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:40:12.434573 | orchestrator | 2026-03-29 00:40:12.434584 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-29 00:40:12.434608 | orchestrator | Sunday 29 March 2026 00:40:08 +0000 (0:00:00.605) 0:00:01.901 ********** 2026-03-29 00:40:12.434619 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:40:12.434632 | orchestrator | 2026-03-29 00:40:12.434646 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-29 00:40:12.434658 | orchestrator | 2026-03-29 00:40:12.434670 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-29 00:40:12.434682 | orchestrator | Sunday 29 March 2026 00:40:08 +0000 (0:00:00.097) 0:00:01.999 ********** 2026-03-29 00:40:12.434694 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:40:12.434706 | orchestrator | 2026-03-29 00:40:12.434718 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-29 00:40:12.434731 | orchestrator | Sunday 29 March 2026 00:40:09 +0000 (0:00:00.168) 0:00:02.167 ********** 2026-03-29 00:40:12.434742 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:40:12.434754 | orchestrator | 2026-03-29 00:40:12.434768 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-29 00:40:12.434780 | orchestrator | Sunday 29 March 2026 00:40:09 +0000 (0:00:00.572) 0:00:02.740 ********** 2026-03-29 00:40:12.434793 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:40:12.434805 | orchestrator | 2026-03-29 00:40:12.434818 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-29 00:40:12.434828 | orchestrator | 2026-03-29 00:40:12.434839 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-03-29 00:40:12.434850 | orchestrator | Sunday 29 March 2026 00:40:09 +0000 (0:00:00.111) 0:00:02.852 ********** 2026-03-29 00:40:12.434861 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:40:12.434871 | orchestrator | 2026-03-29 00:40:12.434882 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-29 00:40:12.434893 | orchestrator | Sunday 29 March 2026 00:40:09 +0000 (0:00:00.081) 0:00:02.933 ********** 2026-03-29 00:40:12.434903 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:40:12.434914 | orchestrator | 2026-03-29 00:40:12.434925 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-29 00:40:12.434935 | orchestrator | Sunday 29 March 2026 00:40:10 +0000 (0:00:00.586) 0:00:03.520 ********** 2026-03-29 00:40:12.434946 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:40:12.434957 | orchestrator | 2026-03-29 00:40:12.434967 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-29 00:40:12.434978 | orchestrator | 2026-03-29 00:40:12.434989 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-29 00:40:12.434999 | orchestrator | Sunday 29 March 2026 00:40:10 +0000 (0:00:00.103) 0:00:03.624 ********** 2026-03-29 00:40:12.435010 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:40:12.435020 | orchestrator | 2026-03-29 00:40:12.435031 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-29 00:40:12.435064 | orchestrator | Sunday 29 March 2026 00:40:10 +0000 (0:00:00.086) 0:00:03.710 ********** 2026-03-29 00:40:12.435084 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:40:12.435094 | orchestrator | 2026-03-29 00:40:12.435105 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-29 00:40:12.435116 | orchestrator | Sunday 29 March 2026 00:40:11 +0000 (0:00:00.630) 0:00:04.340 ********** 2026-03-29 00:40:12.435126 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:40:12.435137 | orchestrator | 2026-03-29 00:40:12.435148 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-29 00:40:12.435159 | orchestrator | 2026-03-29 00:40:12.435170 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-29 00:40:12.435181 | orchestrator | Sunday 29 March 2026 00:40:11 +0000 (0:00:00.084) 0:00:04.424 ********** 2026-03-29 00:40:12.435191 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:40:12.435202 | orchestrator | 2026-03-29 00:40:12.435212 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-29 00:40:12.435223 | orchestrator | Sunday 29 March 2026 00:40:11 +0000 (0:00:00.105) 0:00:04.530 ********** 2026-03-29 00:40:12.435234 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:40:12.435244 | orchestrator | 2026-03-29 00:40:12.435255 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-29 00:40:12.435266 | orchestrator | Sunday 29 March 2026 00:40:12 +0000 (0:00:00.659) 0:00:05.189 ********** 2026-03-29 00:40:12.435293 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:40:12.435318 | orchestrator | 2026-03-29 00:40:12.435329 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:40:12.435341 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:40:12.435364 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:40:12.435375 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-29 00:40:12.435385 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:40:12.435396 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:40:12.435407 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:40:12.435417 | orchestrator | 2026-03-29 00:40:12.435428 | orchestrator | 2026-03-29 00:40:12.435439 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:40:12.435450 | orchestrator | Sunday 29 March 2026 00:40:12 +0000 (0:00:00.037) 0:00:05.226 ********** 2026-03-29 00:40:12.435466 | orchestrator | =============================================================================== 2026-03-29 00:40:12.435477 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 3.84s 2026-03-29 00:40:12.435488 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.65s 2026-03-29 00:40:12.435499 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.53s 2026-03-29 00:40:12.612777 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-29 00:40:24.403665 | orchestrator | 2026-03-29 00:40:24 | INFO  | Task d95cb6e7-afb0-4273-9522-f638a9207344 (wait-for-connection) was prepared for execution. 2026-03-29 00:40:24.403770 | orchestrator | 2026-03-29 00:40:24 | INFO  | It takes a moment until task d95cb6e7-afb0-4273-9522-f638a9207344 (wait-for-connection) has been started and output is visible here. 
2026-03-29 00:40:40.546663 | orchestrator | 2026-03-29 00:40:40.546810 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-29 00:40:40.546878 | orchestrator | 2026-03-29 00:40:40.546899 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-29 00:40:40.546918 | orchestrator | Sunday 29 March 2026 00:40:28 +0000 (0:00:00.226) 0:00:00.226 ********** 2026-03-29 00:40:40.546935 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:40:40.546954 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:40:40.546970 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:40:40.546988 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:40:40.547007 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:40:40.547024 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:40:40.547042 | orchestrator | 2026-03-29 00:40:40.547061 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:40:40.547073 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:40:40.547086 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:40:40.547097 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:40:40.547108 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:40:40.547193 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:40:40.547212 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:40:40.547232 | orchestrator | 2026-03-29 00:40:40.547265 | orchestrator | 2026-03-29 00:40:40.547283 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-29 00:40:40.547301 | orchestrator | Sunday 29 March 2026 00:40:40 +0000 (0:00:11.626) 0:00:11.853 ********** 2026-03-29 00:40:40.547319 | orchestrator | =============================================================================== 2026-03-29 00:40:40.547337 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.63s 2026-03-29 00:40:40.746304 | orchestrator | + osism apply hddtemp 2026-03-29 00:40:52.675240 | orchestrator | 2026-03-29 00:40:52 | INFO  | Task f279ae6f-624f-484f-a42d-786d44a55bc2 (hddtemp) was prepared for execution. 2026-03-29 00:40:52.675339 | orchestrator | 2026-03-29 00:40:52 | INFO  | It takes a moment until task f279ae6f-624f-484f-a42d-786d44a55bc2 (hddtemp) has been started and output is visible here. 2026-03-29 00:41:20.477840 | orchestrator | 2026-03-29 00:41:20.477955 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-29 00:41:20.477973 | orchestrator | 2026-03-29 00:41:20.477985 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-29 00:41:20.477997 | orchestrator | Sunday 29 March 2026 00:40:56 +0000 (0:00:00.268) 0:00:00.268 ********** 2026-03-29 00:41:20.478008 | orchestrator | ok: [testbed-manager] 2026-03-29 00:41:20.478096 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:41:20.478116 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:41:20.478136 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:41:20.478155 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:41:20.478176 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:41:20.478196 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:41:20.478259 | orchestrator | 2026-03-29 00:41:20.478272 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-29 00:41:20.478283 | orchestrator | Sunday 29 March 2026 
00:40:57 +0000 (0:00:00.742) 0:00:01.011 ********** 2026-03-29 00:41:20.478296 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:41:20.478337 | orchestrator | 2026-03-29 00:41:20.478350 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-29 00:41:20.478361 | orchestrator | Sunday 29 March 2026 00:40:58 +0000 (0:00:01.212) 0:00:02.223 ********** 2026-03-29 00:41:20.478372 | orchestrator | ok: [testbed-manager] 2026-03-29 00:41:20.478384 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:41:20.478396 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:41:20.478408 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:41:20.478421 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:41:20.478434 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:41:20.478447 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:41:20.478459 | orchestrator | 2026-03-29 00:41:20.478472 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-29 00:41:20.478500 | orchestrator | Sunday 29 March 2026 00:41:00 +0000 (0:00:01.997) 0:00:04.220 ********** 2026-03-29 00:41:20.478513 | orchestrator | changed: [testbed-manager] 2026-03-29 00:41:20.478528 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:41:20.478541 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:41:20.478552 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:41:20.478562 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:41:20.478573 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:41:20.478584 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:41:20.478594 | orchestrator | 2026-03-29 00:41:20.478605 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module 
is available] ********* 2026-03-29 00:41:20.478616 | orchestrator | Sunday 29 March 2026 00:41:01 +0000 (0:00:01.077) 0:00:05.298 ********** 2026-03-29 00:41:20.478627 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:41:20.478637 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:41:20.478648 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:41:20.478659 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:41:20.478670 | orchestrator | ok: [testbed-manager] 2026-03-29 00:41:20.478680 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:41:20.478691 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:41:20.478702 | orchestrator | 2026-03-29 00:41:20.478712 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-29 00:41:20.478723 | orchestrator | Sunday 29 March 2026 00:41:03 +0000 (0:00:01.100) 0:00:06.398 ********** 2026-03-29 00:41:20.478734 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:41:20.478745 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:41:20.478755 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:41:20.478766 | orchestrator | changed: [testbed-manager] 2026-03-29 00:41:20.478776 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:41:20.478787 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:41:20.478798 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:41:20.478809 | orchestrator | 2026-03-29 00:41:20.478819 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-29 00:41:20.478830 | orchestrator | Sunday 29 March 2026 00:41:03 +0000 (0:00:00.716) 0:00:07.114 ********** 2026-03-29 00:41:20.478841 | orchestrator | changed: [testbed-manager] 2026-03-29 00:41:20.478851 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:41:20.478862 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:41:20.478872 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:41:20.478884 | orchestrator | changed: 
[testbed-node-4] 2026-03-29 00:41:20.478894 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:41:20.478905 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:41:20.478915 | orchestrator | 2026-03-29 00:41:20.478927 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-29 00:41:20.478938 | orchestrator | Sunday 29 March 2026 00:41:17 +0000 (0:00:13.679) 0:00:20.793 ********** 2026-03-29 00:41:20.478949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:41:20.478960 | orchestrator | 2026-03-29 00:41:20.478978 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-29 00:41:20.478989 | orchestrator | Sunday 29 March 2026 00:41:18 +0000 (0:00:01.075) 0:00:21.869 ********** 2026-03-29 00:41:20.479000 | orchestrator | changed: [testbed-manager] 2026-03-29 00:41:20.479011 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:41:20.479021 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:41:20.479032 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:41:20.479043 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:41:20.479054 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:41:20.479064 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:41:20.479075 | orchestrator | 2026-03-29 00:41:20.479086 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:41:20.479097 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:41:20.479129 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:41:20.479142 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:41:20.479153 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:41:20.479164 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:41:20.479175 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:41:20.479186 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:41:20.479196 | orchestrator | 2026-03-29 00:41:20.479234 | orchestrator | 2026-03-29 00:41:20.479246 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:41:20.479257 | orchestrator | Sunday 29 March 2026 00:41:20 +0000 (0:00:01.735) 0:00:23.605 ********** 2026-03-29 00:41:20.479268 | orchestrator | =============================================================================== 2026-03-29 00:41:20.479279 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.68s 2026-03-29 00:41:20.479289 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.00s 2026-03-29 00:41:20.479301 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.74s 2026-03-29 00:41:20.479317 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s 2026-03-29 00:41:20.479328 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.10s 2026-03-29 00:41:20.479339 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.08s 2026-03-29 00:41:20.479349 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.08s 2026-03-29 00:41:20.479360 | orchestrator | osism.services.hddtemp : Gather 
variables for each operating system ----- 0.74s 2026-03-29 00:41:20.479371 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.72s 2026-03-29 00:41:20.685200 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-29 00:41:20.724029 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-29 00:41:20.724172 | orchestrator | + sudo systemctl restart manager.service 2026-03-29 00:41:33.986844 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-29 00:41:33.987985 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-29 00:41:33.988057 | orchestrator | + local max_attempts=60 2026-03-29 00:41:33.988072 | orchestrator | + local name=ceph-ansible 2026-03-29 00:41:33.988084 | orchestrator | + local attempt_num=1 2026-03-29 00:41:33.988096 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:34.026224 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:34.026354 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:41:34.026369 | orchestrator | + sleep 5 2026-03-29 00:41:39.030002 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:39.087153 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:39.087209 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:41:39.087214 | orchestrator | + sleep 5 2026-03-29 00:41:44.090445 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:44.122677 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:44.122770 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:41:44.122785 | orchestrator | + sleep 5 2026-03-29 00:41:49.126636 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:49.166397 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:49.166476 | orchestrator | + 
(( attempt_num++ == max_attempts )) 2026-03-29 00:41:49.166488 | orchestrator | + sleep 5 2026-03-29 00:41:54.171093 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:54.207755 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:54.207815 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:41:54.207822 | orchestrator | + sleep 5 2026-03-29 00:41:59.212531 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:41:59.246656 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:41:59.246754 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:41:59.246770 | orchestrator | + sleep 5 2026-03-29 00:42:04.251217 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:42:04.287074 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:42:04.287167 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:42:04.287182 | orchestrator | + sleep 5 2026-03-29 00:42:09.292206 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:42:09.317489 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:42:09.317591 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:42:09.317616 | orchestrator | + sleep 5 2026-03-29 00:42:14.320198 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:42:14.361669 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:42:14.361786 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:42:14.361813 | orchestrator | + sleep 5 2026-03-29 00:42:19.365663 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:42:19.400079 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:42:19.400177 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-29 00:42:19.400193 | orchestrator | + sleep 5 2026-03-29 00:42:24.403647 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:42:24.439312 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:42:24.439434 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:42:24.439451 | orchestrator | + sleep 5 2026-03-29 00:42:29.442708 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:42:29.476373 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:42:29.476462 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:42:29.476477 | orchestrator | + sleep 5 2026-03-29 00:42:34.479503 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:42:34.514792 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-29 00:42:34.514866 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-29 00:42:34.514875 | orchestrator | + sleep 5 2026-03-29 00:42:39.518803 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-29 00:42:39.549994 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:42:39.550125 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-29 00:42:39.550137 | orchestrator | + local max_attempts=60 2026-03-29 00:42:39.550145 | orchestrator | + local name=kolla-ansible 2026-03-29 00:42:39.550153 | orchestrator | + local attempt_num=1 2026-03-29 00:42:39.551084 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-29 00:42:39.590410 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:42:39.590478 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-29 00:42:39.590485 | orchestrator | + local max_attempts=60 2026-03-29 00:42:39.590511 | orchestrator | + local name=osism-ansible 2026-03-29 00:42:39.590516 | 
orchestrator | + local attempt_num=1 2026-03-29 00:42:39.590901 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-29 00:42:39.622488 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-29 00:42:39.622576 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-29 00:42:39.622590 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-29 00:42:39.770926 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-29 00:42:39.919188 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-29 00:42:40.059053 | orchestrator | ARA in osism-ansible already disabled. 2026-03-29 00:42:40.189901 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-29 00:42:40.190259 | orchestrator | + osism apply gather-facts 2026-03-29 00:42:52.091032 | orchestrator | 2026-03-29 00:42:52 | INFO  | Task 37579bd5-060d-44aa-8b69-be8e9a1600fe (gather-facts) was prepared for execution. 2026-03-29 00:42:52.091137 | orchestrator | 2026-03-29 00:42:52 | INFO  | It takes a moment until task 37579bd5-060d-44aa-8b69-be8e9a1600fe (gather-facts) has been started and output is visible here. 
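The xtrace above shows the manager bootstrap polling each Ansible runner container (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) until Docker reports it healthy. The loop can be reconstructed from the trace as the sketch below; this is an approximation inferred from the `+` lines (the actual script in `/opt/configuration` is not shown here), using only the calls visible in the log: `docker inspect -f '{{.State.Health.Status}}'`, an attempt counter compared against `max_attempts`, and a 5-second sleep between probes.

```shell
#!/usr/bin/env bash
# Sketch reconstructed from the xtrace: poll a container's health status
# until it becomes "healthy" or the attempt budget runs out.
wait_for_container_healthy() {
    local max_attempts="$1"   # e.g. 60, as in: wait_for_container_healthy 60 ceph-ansible
    local name="$2"
    local attempt_num=1
    # Status progresses through "unhealthy"/"starting" before "healthy",
    # exactly as seen in the trace above.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `max_attempts=60` and a 5-second interval this allows roughly five minutes per container; in the run above `ceph-ansible` needed about a minute to go from `unhealthy` through `starting` to `healthy`, while the other two were already healthy on the first probe.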
2026-03-29 00:43:04.882617 | orchestrator | 2026-03-29 00:43:04.882701 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 00:43:04.882709 | orchestrator | 2026-03-29 00:43:04.882714 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-29 00:43:04.882719 | orchestrator | Sunday 29 March 2026 00:42:55 +0000 (0:00:00.210) 0:00:00.210 ********** 2026-03-29 00:43:04.882725 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:43:04.882731 | orchestrator | ok: [testbed-manager] 2026-03-29 00:43:04.882736 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:43:04.882741 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:43:04.882745 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:43:04.882750 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:43:04.882754 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:43:04.882759 | orchestrator | 2026-03-29 00:43:04.882764 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-29 00:43:04.882768 | orchestrator | 2026-03-29 00:43:04.882773 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-29 00:43:04.882777 | orchestrator | Sunday 29 March 2026 00:43:04 +0000 (0:00:08.274) 0:00:08.485 ********** 2026-03-29 00:43:04.882782 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:43:04.882788 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:43:04.882793 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:43:04.882797 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:43:04.882802 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:43:04.882806 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:43:04.882811 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:43:04.882815 | orchestrator | 2026-03-29 00:43:04.882820 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-29 00:43:04.882825 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:43:04.882831 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:43:04.882836 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:43:04.882840 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:43:04.882845 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:43:04.882850 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:43:04.882854 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-29 00:43:04.882878 | orchestrator | 2026-03-29 00:43:04.882883 | orchestrator | 2026-03-29 00:43:04.882887 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:43:04.882892 | orchestrator | Sunday 29 March 2026 00:43:04 +0000 (0:00:00.439) 0:00:08.924 ********** 2026-03-29 00:43:04.882897 | orchestrator | =============================================================================== 2026-03-29 00:43:04.882901 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.27s 2026-03-29 00:43:04.882906 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s 2026-03-29 00:43:05.106488 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-29 00:43:05.116513 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-29 
00:43:05.126160 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-29 00:43:05.142191 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-29 00:43:05.154552 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-29 00:43:05.167008 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-29 00:43:05.184910 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-29 00:43:05.196342 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-29 00:43:05.206611 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-29 00:43:05.216815 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-29 00:43:05.226921 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-29 00:43:05.234672 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-29 00:43:05.243763 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-29 00:43:05.252632 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-29 00:43:05.261526 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-29 00:43:05.271159 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-29 00:43:05.281952 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-29 00:43:05.289990 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-29 00:43:05.316089 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-29 00:43:05.324765 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-29 00:43:05.332734 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-29 00:43:05.340672 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-29 00:43:05.353745 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-29 00:43:05.365349 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-29 00:43:05.794055 | orchestrator | ok: Runtime: 0:24:26.028757 2026-03-29 00:43:05.908803 | 2026-03-29 00:43:05.908945 | TASK [Deploy services] 2026-03-29 00:43:06.441902 | orchestrator | skipping: Conditional result was False 2026-03-29 00:43:06.459791 | 2026-03-29 00:43:06.459966 | TASK [Deploy in a nutshell] 2026-03-29 00:43:07.185171 | orchestrator | + set -e 2026-03-29 00:43:07.185302 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 00:43:07.185313 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 00:43:07.185322 | orchestrator | ++ INTERACTIVE=false 2026-03-29 00:43:07.185327 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 00:43:07.185331 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 00:43:07.185337 | 
orchestrator | + source /opt/manager-vars.sh 2026-03-29 00:43:07.185359 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 00:43:07.185370 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 00:43:07.185375 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 00:43:07.185382 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 00:43:07.185386 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 00:43:07.185393 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 00:43:07.185397 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 00:43:07.185406 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 00:43:07.185410 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 00:43:07.185438 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 00:43:07.185442 | orchestrator | ++ export ARA=false 2026-03-29 00:43:07.185446 | orchestrator | ++ ARA=false 2026-03-29 00:43:07.185450 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 00:43:07.185456 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 00:43:07.185459 | orchestrator | ++ export TEMPEST=true 2026-03-29 00:43:07.185463 | orchestrator | ++ TEMPEST=true 2026-03-29 00:43:07.185467 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 00:43:07.185471 | orchestrator | ++ IS_ZUUL=true 2026-03-29 00:43:07.185475 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-03-29 00:43:07.185479 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-03-29 00:43:07.185483 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 00:43:07.185486 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 00:43:07.185490 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 00:43:07.185502 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 00:43:07.185506 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 00:43:07.185510 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 00:43:07.185514 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 00:43:07.185518 | 
orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-29 00:43:07.185522 | orchestrator |
2026-03-29 00:43:07.185526 | orchestrator | # PULL IMAGES
2026-03-29 00:43:07.185530 | orchestrator |
2026-03-29 00:43:07.185534 | orchestrator | + echo
2026-03-29 00:43:07.185538 | orchestrator | + echo '# PULL IMAGES'
2026-03-29 00:43:07.185541 | orchestrator | + echo
2026-03-29 00:43:07.186028 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-29 00:43:07.246407 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-29 00:43:07.246562 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-29 00:43:08.949368 | orchestrator | 2026-03-29 00:43:08 | INFO  | Trying to run play pull-images in environment custom
2026-03-29 00:43:19.090769 | orchestrator | 2026-03-29 00:43:19 | INFO  | Task c4719f81-f082-4671-ad66-44ec2ef2ea80 (pull-images) was prepared for execution.
2026-03-29 00:43:19.090950 | orchestrator | 2026-03-29 00:43:19 | INFO  | Task c4719f81-f082-4671-ad66-44ec2ef2ea80 is running in background. No more output. Check ARA for logs.
2026-03-29 00:43:21.104722 | orchestrator | 2026-03-29 00:43:21 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-29 00:43:31.256785 | orchestrator | 2026-03-29 00:43:31 | INFO  | Task 9ed58eca-307b-475f-819c-ba604861c2da (wipe-partitions) was prepared for execution.
2026-03-29 00:43:31.256921 | orchestrator | 2026-03-29 00:43:31 | INFO  | It takes a moment until task 9ed58eca-307b-475f-819c-ba604861c2da (wipe-partitions) has been started and output is visible here.
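Per the `set -x` trace, `semver 9.5.0 7.0.0` prints `1` and the script then tests `[[ 1 -ge 0 ]]`, i.e. it runs `osism apply ... pull-images` only when the manager version is at least 7.0.0. A sketch of that gate, with a hypothetical pure-shell comparator standing in for the testbed's real `semver` helper (only its observed -1/0/1 contract is taken from the log):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical stand-in for the testbed's `semver` helper: prints -1, 0 or 1
# depending on how version $1 compares to version $2 (uses GNU sort -V).
semver() {
    if [[ "$1" == "$2" ]]; then echo 0; return; fi
    local lowest
    lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [[ "$lowest" == "$1" ]]; then echo -1; else echo 1; fi
}

MANAGER_VERSION=9.5.0

# Mirror the `[[ 1 -ge 0 ]]` check from the trace: pull images only on
# managers >= 7.0.0 (echo instead of the real osism call).
if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 ]]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```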
2026-03-29 00:43:43.502355 | orchestrator |
2026-03-29 00:43:43.502450 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-29 00:43:43.502465 | orchestrator |
2026-03-29 00:43:43.502507 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-29 00:43:43.502520 | orchestrator | Sunday 29 March 2026 00:43:35 +0000 (0:00:00.133) 0:00:00.133 **********
2026-03-29 00:43:43.502531 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:43:43.502540 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:43:43.502549 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:43:43.502558 | orchestrator |
2026-03-29 00:43:43.502567 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-29 00:43:43.502600 | orchestrator | Sunday 29 March 2026 00:43:35 +0000 (0:00:00.658) 0:00:00.791 **********
2026-03-29 00:43:43.502606 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:43:43.502612 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:43:43.502617 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:43:43.502625 | orchestrator |
2026-03-29 00:43:43.502631 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-29 00:43:43.502636 | orchestrator | Sunday 29 March 2026 00:43:36 +0000 (0:00:00.380) 0:00:01.172 **********
2026-03-29 00:43:43.502642 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:43:43.502647 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:43:43.502652 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:43:43.502658 | orchestrator |
2026-03-29 00:43:43.502663 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-29 00:43:43.502668 | orchestrator | Sunday 29 March 2026 00:43:36 +0000 (0:00:00.597) 0:00:01.769 **********
2026-03-29 00:43:43.502673 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:43:43.502679 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:43:43.502684 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:43:43.502689 | orchestrator |
2026-03-29 00:43:43.502694 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-29 00:43:43.502699 | orchestrator | Sunday 29 March 2026 00:43:37 +0000 (0:00:00.257) 0:00:02.027 **********
2026-03-29 00:43:43.502705 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-29 00:43:43.502713 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-29 00:43:43.502718 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-29 00:43:43.502723 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-29 00:43:43.502728 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-29 00:43:43.502733 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-29 00:43:43.502738 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-29 00:43:43.502743 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-29 00:43:43.502748 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-29 00:43:43.502753 | orchestrator |
2026-03-29 00:43:43.502758 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-29 00:43:43.502764 | orchestrator | Sunday 29 March 2026 00:43:38 +0000 (0:00:01.239) 0:00:03.267 **********
2026-03-29 00:43:43.502769 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-29 00:43:43.502774 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-29 00:43:43.502780 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-29 00:43:43.502785 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-29 00:43:43.502790 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-29 00:43:43.502795 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-29 00:43:43.502800 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-29 00:43:43.502805 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-29 00:43:43.502810 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-29 00:43:43.502815 | orchestrator |
2026-03-29 00:43:43.502820 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-29 00:43:43.502825 | orchestrator | Sunday 29 March 2026 00:43:39 +0000 (0:00:01.489) 0:00:04.757 **********
2026-03-29 00:43:43.502830 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-29 00:43:43.502835 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-29 00:43:43.502840 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-29 00:43:43.502845 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-29 00:43:43.502850 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-29 00:43:43.502860 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-29 00:43:43.502865 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-29 00:43:43.502870 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-29 00:43:43.502880 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-29 00:43:43.502886 | orchestrator |
2026-03-29 00:43:43.502892 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-29 00:43:43.502898 | orchestrator | Sunday 29 March 2026 00:43:41 +0000 (0:00:02.162) 0:00:06.919 **********
2026-03-29 00:43:43.502904 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:43:43.502910 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:43:43.502915 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:43:43.502921 | orchestrator |
2026-03-29 00:43:43.502926 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-29 00:43:43.502932 | orchestrator | Sunday 29 March 2026 00:43:42 +0000 (0:00:00.638) 0:00:07.558 **********
2026-03-29 00:43:43.502938 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:43:43.502944 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:43:43.502949 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:43:43.502955 | orchestrator |
2026-03-29 00:43:43.502961 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:43:43.502968 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:43:43.502976 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:43:43.502996 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:43:43.503003 | orchestrator |
2026-03-29 00:43:43.503009 | orchestrator |
2026-03-29 00:43:43.503014 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:43:43.503020 | orchestrator | Sunday 29 March 2026 00:43:43 +0000 (0:00:00.707) 0:00:08.266 **********
2026-03-29 00:43:43.503026 | orchestrator | ===============================================================================
2026-03-29 00:43:43.503032 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.16s
2026-03-29 00:43:43.503037 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.49s
2026-03-29 00:43:43.503043 | orchestrator | Check device availability ----------------------------------------------- 1.24s
2026-03-29 00:43:43.503050 | orchestrator | Request device events from the kernel ----------------------------------- 0.71s
2026-03-29 00:43:43.503055 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.66s
2026-03-29 00:43:43.503061 | orchestrator | Reload udev rules ------------------------------------------------------- 0.64s
2026-03-29 00:43:43.503067 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.60s
2026-03-29 00:43:43.503073 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s
2026-03-29 00:43:43.503079 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2026-03-29 00:43:55.540365 | orchestrator | 2026-03-29 00:43:55 | INFO  | Task ba5f0d46-f3ee-4ae6-a540-aea92ba91ee3 (facts) was prepared for execution.
2026-03-29 00:43:55.540466 | orchestrator | 2026-03-29 00:43:55 | INFO  | It takes a moment until task ba5f0d46-f3ee-4ae6-a540-aea92ba91ee3 (facts) has been started and output is visible here.
2026-03-29 00:44:07.111951 | orchestrator |
2026-03-29 00:44:07.112046 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-29 00:44:07.112058 | orchestrator |
2026-03-29 00:44:07.112068 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-29 00:44:07.112079 | orchestrator | Sunday 29 March 2026 00:43:59 +0000 (0:00:00.260) 0:00:00.260 **********
2026-03-29 00:44:07.112088 | orchestrator | ok: [testbed-manager]
2026-03-29 00:44:07.112099 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:44:07.112108 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:44:07.112118 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:44:07.112151 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:07.112161 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:44:07.112170 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:44:07.112179 | orchestrator |
2026-03-29 00:44:07.112191 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-29 00:44:07.112201 |
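The wipe-partitions play above boils down to a fixed per-disk sequence: remove filesystem and partition-table signatures with `wipefs`, zero the first 32 MiB, then reload udev rules and re-trigger block-device events. A minimal sketch of the same sequence, run against a scratch file instead of a real disk such as `/dev/sdb` (the udev steps are left commented since they only make sense on real devices):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Work on a 64 MiB scratch file standing in for an OSD disk.
img=$(mktemp)
truncate -s 64M "$img"
printf 'FAKE-SIGNATURE' | dd of="$img" conv=notrunc status=none

wipe_device() {
    local dev=$1
    # "Wipe partitions with wipefs": drop any recognized signatures.
    if command -v wipefs >/dev/null; then
        wipefs --all "$dev"
    fi
    # "Overwrite first 32M with zeros".
    dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc status=none
}

wipe_device "$img"

# On real block devices the play then reloads udev rules and requests
# fresh device events; no-ops for a plain file, shown here for shape:
#   udevadm control --reload-rules
#   udevadm trigger --subsystem-match=block

echo "wiped $img (first 32 MiB zeroed)"
```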
orchestrator | Sunday 29 March 2026 00:44:00 +0000 (0:00:01.068) 0:00:01.328 **********
2026-03-29 00:44:07.112210 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:44:07.112221 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:44:07.112231 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:44:07.112241 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:44:07.112250 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:07.112260 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:07.112269 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:07.112277 | orchestrator |
2026-03-29 00:44:07.112286 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-29 00:44:07.112295 | orchestrator |
2026-03-29 00:44:07.112304 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-29 00:44:07.112312 | orchestrator | Sunday 29 March 2026 00:44:01 +0000 (0:00:01.064) 0:00:02.392 **********
2026-03-29 00:44:07.112322 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:44:07.112331 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:44:07.112339 | orchestrator | ok: [testbed-manager]
2026-03-29 00:44:07.112349 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:44:07.112358 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:07.112367 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:44:07.112376 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:44:07.112384 | orchestrator |
2026-03-29 00:44:07.112393 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-29 00:44:07.112403 | orchestrator |
2026-03-29 00:44:07.112412 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-29 00:44:07.112438 | orchestrator | Sunday 29 March 2026 00:44:06 +0000 (0:00:04.767) 0:00:07.160 **********
2026-03-29 00:44:07.112448 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:44:07.112457 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:44:07.112466 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:44:07.112475 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:44:07.112483 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:07.112489 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:07.112494 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:07.112500 | orchestrator |
2026-03-29 00:44:07.112505 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:44:07.112534 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:44:07.112542 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:44:07.112548 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:44:07.112555 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:44:07.112561 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:44:07.112567 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:44:07.112574 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:44:07.112580 | orchestrator |
2026-03-29 00:44:07.112586 | orchestrator |
2026-03-29 00:44:07.112593 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:44:07.112606 | orchestrator | Sunday 29 March 2026 00:44:06 +0000 (0:00:00.438) 0:00:07.598 **********
2026-03-29 00:44:07.112612 | orchestrator | ===============================================================================
2026-03-29 00:44:07.112619 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.77s
2026-03-29 00:44:07.112628 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.07s
2026-03-29 00:44:07.112637 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s
2026-03-29 00:44:07.112646 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s
2026-03-29 00:44:09.134086 | orchestrator | 2026-03-29 00:44:09 | INFO  | Task 505e0ff0-82a3-4b67-af4d-ea28693082b0 (ceph-configure-lvm-volumes) was prepared for execution.
2026-03-29 00:44:09.134173 | orchestrator | 2026-03-29 00:44:09 | INFO  | It takes a moment until task 505e0ff0-82a3-4b67-af4d-ea28693082b0 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-29 00:44:19.619873 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-29 00:44:19.620032 | orchestrator | 2.16.14
2026-03-29 00:44:19.620061 | orchestrator |
2026-03-29 00:44:19.620083 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-29 00:44:19.620102 | orchestrator |
2026-03-29 00:44:19.620124 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-29 00:44:19.620143 | orchestrator | Sunday 29 March 2026 00:44:13 +0000 (0:00:00.298) 0:00:00.298 **********
2026-03-29 00:44:19.620162 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-29 00:44:19.620184 | orchestrator |
2026-03-29 00:44:19.620203 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-29 00:44:19.620223 | orchestrator | Sunday 29 March 2026 00:44:13 +0000 (0:00:00.228) 0:00:00.527 **********
2026-03-29 00:44:19.620242 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:19.620258 | orchestrator |
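The `facts` play run above creates the custom facts directory and (here, skipped) copies fact files into it; Ansible exposes any JSON `*.fact` file under `/etc/ansible/facts.d` to playbooks as `ansible_local.<name>`. A minimal sketch of that mechanism using a temporary directory and illustrative fact values (the real play manages `/etc/ansible/facts.d` on each host):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for /etc/ansible/facts.d on a managed host.
factsdir=$(mktemp -d)

# A static .fact file containing JSON; a playbook would read this as
# ansible_local.testbed.deploy_mode etc. (values are illustrative).
cat > "$factsdir/testbed.fact" <<'EOF'
{"deploy_mode": "manager", "ceph_stack": "ceph-ansible"}
EOF

# Show that the file parses as the JSON Ansible would ingest.
python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["deploy_mode"])' \
    "$factsdir/testbed.fact"
```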
2026-03-29 00:44:19.620270 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.620281 | orchestrator | Sunday 29 March 2026 00:44:13 +0000 (0:00:00.219) 0:00:00.747 **********
2026-03-29 00:44:19.620292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-29 00:44:19.620304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-29 00:44:19.620314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-29 00:44:19.620325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-29 00:44:19.620336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-29 00:44:19.620347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-29 00:44:19.620360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-29 00:44:19.620372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-29 00:44:19.620385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-29 00:44:19.620396 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-29 00:44:19.620420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-29 00:44:19.620433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-29 00:44:19.620445 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-29 00:44:19.620457 | orchestrator |
2026-03-29 00:44:19.620469 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.620482 | orchestrator | Sunday 29 March 2026 00:44:14 +0000 (0:00:00.405) 0:00:01.152 **********
2026-03-29 00:44:19.620524 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.620571 | orchestrator |
2026-03-29 00:44:19.620584 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.620597 | orchestrator | Sunday 29 March 2026 00:44:14 +0000 (0:00:00.204) 0:00:01.357 **********
2026-03-29 00:44:19.620608 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.620618 | orchestrator |
2026-03-29 00:44:19.620629 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.620640 | orchestrator | Sunday 29 March 2026 00:44:14 +0000 (0:00:00.176) 0:00:01.534 **********
2026-03-29 00:44:19.620651 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.620662 | orchestrator |
2026-03-29 00:44:19.620672 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.620683 | orchestrator | Sunday 29 March 2026 00:44:14 +0000 (0:00:00.191) 0:00:01.725 **********
2026-03-29 00:44:19.620699 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.620711 | orchestrator |
2026-03-29 00:44:19.620722 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.620732 | orchestrator | Sunday 29 March 2026 00:44:14 +0000 (0:00:00.175) 0:00:01.901 **********
2026-03-29 00:44:19.620743 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.620754 | orchestrator |
2026-03-29 00:44:19.620765 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.620776 | orchestrator | Sunday 29 March 2026 00:44:14 +0000 (0:00:00.185) 0:00:02.087 **********
2026-03-29 00:44:19.620787 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.620798 | orchestrator |
2026-03-29 00:44:19.620809 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.620820 | orchestrator | Sunday 29 March 2026 00:44:15 +0000 (0:00:00.185) 0:00:02.273 **********
2026-03-29 00:44:19.620830 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.620841 | orchestrator |
2026-03-29 00:44:19.620852 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.620863 | orchestrator | Sunday 29 March 2026 00:44:15 +0000 (0:00:00.195) 0:00:02.468 **********
2026-03-29 00:44:19.620874 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.620885 | orchestrator |
2026-03-29 00:44:19.620895 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.620906 | orchestrator | Sunday 29 March 2026 00:44:15 +0000 (0:00:00.188) 0:00:02.656 **********
2026-03-29 00:44:19.620918 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a)
2026-03-29 00:44:19.620931 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a)
2026-03-29 00:44:19.620941 | orchestrator |
2026-03-29 00:44:19.620952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.620987 | orchestrator | Sunday 29 March 2026 00:44:15 +0000 (0:00:00.437) 0:00:03.094 **********
2026-03-29 00:44:19.620999 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551)
2026-03-29 00:44:19.621010 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551)
2026-03-29 00:44:19.621021 | orchestrator |
2026-03-29 00:44:19.621032 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.621043 | orchestrator | Sunday 29 March 2026 00:44:16 +0000 (0:00:00.532) 0:00:03.627 **********
2026-03-29 00:44:19.621053 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf)
2026-03-29 00:44:19.621064 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf)
2026-03-29 00:44:19.621075 | orchestrator |
2026-03-29 00:44:19.621086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.621097 | orchestrator | Sunday 29 March 2026 00:44:17 +0000 (0:00:00.531) 0:00:04.158 **********
2026-03-29 00:44:19.621128 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c)
2026-03-29 00:44:19.621145 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c)
2026-03-29 00:44:19.621162 | orchestrator |
2026-03-29 00:44:19.621181 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:19.621199 | orchestrator | Sunday 29 March 2026 00:44:17 +0000 (0:00:00.662) 0:00:04.820 **********
2026-03-29 00:44:19.621217 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-29 00:44:19.621235 | orchestrator |
2026-03-29 00:44:19.621261 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:19.621280 | orchestrator | Sunday 29 March 2026 00:44:18 +0000 (0:00:00.303) 0:00:05.123 **********
2026-03-29 00:44:19.621298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-29 00:44:19.621316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-29 00:44:19.621333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-29 00:44:19.621351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-29 00:44:19.621369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-29 00:44:19.621387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-29 00:44:19.621406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-29 00:44:19.621425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-29 00:44:19.621440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-29 00:44:19.621450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-29 00:44:19.621461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-29 00:44:19.621472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-29 00:44:19.621482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-29 00:44:19.621493 | orchestrator |
2026-03-29 00:44:19.621505 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:19.621516 | orchestrator | Sunday 29 March 2026 00:44:18 +0000 (0:00:00.352) 0:00:05.476 **********
2026-03-29 00:44:19.621527 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.621569 | orchestrator |
2026-03-29 00:44:19.621581 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:19.621593 | orchestrator | Sunday 29 March 2026 00:44:18 +0000 (0:00:00.185) 0:00:05.661 **********
2026-03-29 00:44:19.621603 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.621614 | orchestrator |
2026-03-29 00:44:19.621625 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:19.621635 | orchestrator | Sunday 29 March 2026 00:44:18 +0000 (0:00:00.182) 0:00:05.844 **********
2026-03-29 00:44:19.621646 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.621657 | orchestrator |
2026-03-29 00:44:19.621668 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:19.621679 | orchestrator | Sunday 29 March 2026 00:44:18 +0000 (0:00:00.183) 0:00:06.028 **********
2026-03-29 00:44:19.621690 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.621700 | orchestrator |
2026-03-29 00:44:19.621711 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:19.621722 | orchestrator | Sunday 29 March 2026 00:44:19 +0000 (0:00:00.169) 0:00:06.198 **********
2026-03-29 00:44:19.621743 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.621754 | orchestrator |
2026-03-29 00:44:19.621765 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:19.621775 | orchestrator | Sunday 29 March 2026 00:44:19 +0000 (0:00:00.151) 0:00:06.350 **********
2026-03-29 00:44:19.621786 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.621797 | orchestrator |
2026-03-29 00:44:19.621808 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:19.621819 | orchestrator | Sunday 29 March 2026 00:44:19 +0000 (0:00:00.178) 0:00:06.528 **********
2026-03-29 00:44:19.621830 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:19.621840 | orchestrator |
2026-03-29 00:44:19.621861 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:26.227636 | orchestrator | Sunday 29 March 2026 00:44:19 +0000 (0:00:00.185) 0:00:06.714 **********
2026-03-29 00:44:26.227720 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.227731 | orchestrator |
2026-03-29 00:44:26.227739 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:26.227747 | orchestrator | Sunday 29 March 2026 00:44:19 +0000 (0:00:00.179) 0:00:06.893 **********
2026-03-29 00:44:26.227754 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-29 00:44:26.227761 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-29 00:44:26.227769 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-29 00:44:26.227775 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-29 00:44:26.227782 | orchestrator |
2026-03-29 00:44:26.227790 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:26.227796 | orchestrator | Sunday 29 March 2026 00:44:20 +0000 (0:00:00.845) 0:00:07.739 **********
2026-03-29 00:44:26.227803 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.227810 | orchestrator |
2026-03-29 00:44:26.227816 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:26.227823 | orchestrator | Sunday 29 March 2026 00:44:20 +0000 (0:00:00.195) 0:00:07.934 **********
2026-03-29 00:44:26.227830 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.227836 | orchestrator |
2026-03-29 00:44:26.227843 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:26.227850 | orchestrator | Sunday 29 March 2026 00:44:21 +0000 (0:00:00.185) 0:00:08.120 **********
2026-03-29 00:44:26.227856 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.227863 | orchestrator |
2026-03-29 00:44:26.227870 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:26.227877 | orchestrator | Sunday 29 March 2026 00:44:21 +0000 (0:00:00.173) 0:00:08.293 **********
2026-03-29 00:44:26.227883 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.227890 | orchestrator |
2026-03-29 00:44:26.227897 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-29 00:44:26.227904 | orchestrator | Sunday 29 March 2026 00:44:21 +0000 (0:00:00.199) 0:00:08.493 **********
2026-03-29 00:44:26.227911 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-29 00:44:26.227917 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-29 00:44:26.227924 | orchestrator |
2026-03-29 00:44:26.227946 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-29 00:44:26.227953 | orchestrator | Sunday 29 March 2026 00:44:21 +0000 (0:00:00.151) 0:00:08.645 **********
2026-03-29 00:44:26.227960 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.227967 | orchestrator |
2026-03-29 00:44:26.227974 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-29 00:44:26.227981 | orchestrator | Sunday 29 March 2026 00:44:21 +0000 (0:00:00.122) 0:00:08.767 **********
2026-03-29 00:44:26.227987 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.227994 | orchestrator |
2026-03-29 00:44:26.228001 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-29 00:44:26.228008 | orchestrator | Sunday 29 March 2026 00:44:21 +0000 (0:00:00.136) 0:00:08.904 **********
2026-03-29 00:44:26.228030 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.228038 | orchestrator |
2026-03-29 00:44:26.228044 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-29 00:44:26.228051 | orchestrator | Sunday 29 March 2026 00:44:21 +0000 (0:00:00.134) 0:00:09.038 **********
2026-03-29 00:44:26.228058 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:26.228065 | orchestrator |
2026-03-29 00:44:26.228072 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-29 00:44:26.228079 | orchestrator | Sunday 29 March 2026 00:44:22 +0000 (0:00:00.138) 0:00:09.176 **********
2026-03-29 00:44:26.228086 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ec951f8f-e82d-5973-b083-619786b6a4a7'}})
2026-03-29 00:44:26.228093 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fb9b884b-e3c0-524d-8e95-f889faf8bdb8'}})
2026-03-29 00:44:26.228100 | orchestrator |
2026-03-29 00:44:26.228107 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-29 00:44:26.228114 | orchestrator | Sunday 29 March 2026 00:44:22 +0000 (0:00:00.165) 0:00:09.342 **********
2026-03-29 00:44:26.228122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ec951f8f-e82d-5973-b083-619786b6a4a7'}})
2026-03-29 00:44:26.228134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fb9b884b-e3c0-524d-8e95-f889faf8bdb8'}})
2026-03-29 00:44:26.228141 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.228147 | orchestrator |
2026-03-29 00:44:26.228154 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-29 00:44:26.228161 | orchestrator | Sunday 29 March 2026 00:44:22 +0000 (0:00:00.153) 0:00:09.496 **********
2026-03-29 00:44:26.228169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ec951f8f-e82d-5973-b083-619786b6a4a7'}})
2026-03-29 00:44:26.228177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fb9b884b-e3c0-524d-8e95-f889faf8bdb8'}})
2026-03-29 00:44:26.228184 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.228192 | orchestrator |
2026-03-29 00:44:26.228200 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-29 00:44:26.228207 | orchestrator | Sunday 29 March 2026 00:44:22 +0000 (0:00:00.358) 0:00:09.855 **********
2026-03-29 00:44:26.228215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ec951f8f-e82d-5973-b083-619786b6a4a7'}})
2026-03-29 00:44:26.228296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fb9b884b-e3c0-524d-8e95-f889faf8bdb8'}})
2026-03-29 00:44:26.228308 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.228316 | orchestrator |
2026-03-29 00:44:26.228326 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-29 00:44:26.228344 | orchestrator | Sunday 29 March 2026 00:44:22 +0000 (0:00:00.153) 0:00:10.008 **********
2026-03-29 00:44:26.228356 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:26.228367 | orchestrator |
2026-03-29 00:44:26.228376 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-29 00:44:26.228384 | orchestrator | Sunday 29 March 2026 00:44:23 +0000 (0:00:00.143) 0:00:10.151 **********
2026-03-29 00:44:26.228391 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:44:26.228398 | orchestrator |
2026-03-29 00:44:26.228406 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-29 00:44:26.228414 | orchestrator | Sunday 29 March 2026 00:44:23 +0000 (0:00:00.140) 0:00:10.292 **********
2026-03-29 00:44:26.228421 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:44:26.228428 | orchestrator |
2026-03-29 00:44:26.228436 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-29 00:44:26.228444 | orchestrator | Sunday 29 March 2026 00:44:23 +0000 (0:00:00.135) 0:00:10.427 ********** 2026-03-29 00:44:26.228460 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:26.228468 | orchestrator | 2026-03-29 00:44:26.228475 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-29 00:44:26.228483 | orchestrator | Sunday 29 March 2026 00:44:23 +0000 (0:00:00.130) 0:00:10.557 ********** 2026-03-29 00:44:26.228490 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:26.228498 | orchestrator | 2026-03-29 00:44:26.228504 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-29 00:44:26.228511 | orchestrator | Sunday 29 March 2026 00:44:23 +0000 (0:00:00.112) 0:00:10.670 ********** 2026-03-29 00:44:26.228518 | orchestrator | ok: [testbed-node-3] => { 2026-03-29 00:44:26.228525 | orchestrator |  "ceph_osd_devices": { 2026-03-29 00:44:26.228531 | orchestrator |  "sdb": { 2026-03-29 00:44:26.228538 | orchestrator |  "osd_lvm_uuid": "ec951f8f-e82d-5973-b083-619786b6a4a7" 2026-03-29 00:44:26.228603 | orchestrator |  }, 2026-03-29 00:44:26.228610 | orchestrator |  "sdc": { 2026-03-29 00:44:26.228617 | orchestrator |  "osd_lvm_uuid": "fb9b884b-e3c0-524d-8e95-f889faf8bdb8" 2026-03-29 00:44:26.228623 | orchestrator |  } 2026-03-29 00:44:26.228630 | orchestrator |  } 2026-03-29 00:44:26.228637 | orchestrator | } 2026-03-29 00:44:26.228643 | orchestrator | 2026-03-29 00:44:26.228650 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-29 00:44:26.228657 | orchestrator | Sunday 29 March 2026 00:44:23 +0000 (0:00:00.112) 0:00:10.782 ********** 2026-03-29 00:44:26.228663 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:26.228670 | orchestrator | 
2026-03-29 00:44:26.228676 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-29 00:44:26.228683 | orchestrator | Sunday 29 March 2026 00:44:23 +0000 (0:00:00.115) 0:00:10.897 ********** 2026-03-29 00:44:26.228690 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:26.228696 | orchestrator | 2026-03-29 00:44:26.228703 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-29 00:44:26.228709 | orchestrator | Sunday 29 March 2026 00:44:23 +0000 (0:00:00.127) 0:00:11.025 ********** 2026-03-29 00:44:26.228716 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:44:26.228723 | orchestrator | 2026-03-29 00:44:26.228729 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-29 00:44:26.228736 | orchestrator | Sunday 29 March 2026 00:44:24 +0000 (0:00:00.117) 0:00:11.142 ********** 2026-03-29 00:44:26.228742 | orchestrator | changed: [testbed-node-3] => { 2026-03-29 00:44:26.228749 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-29 00:44:26.228755 | orchestrator |  "ceph_osd_devices": { 2026-03-29 00:44:26.228762 | orchestrator |  "sdb": { 2026-03-29 00:44:26.228769 | orchestrator |  "osd_lvm_uuid": "ec951f8f-e82d-5973-b083-619786b6a4a7" 2026-03-29 00:44:26.228775 | orchestrator |  }, 2026-03-29 00:44:26.228782 | orchestrator |  "sdc": { 2026-03-29 00:44:26.228789 | orchestrator |  "osd_lvm_uuid": "fb9b884b-e3c0-524d-8e95-f889faf8bdb8" 2026-03-29 00:44:26.228795 | orchestrator |  } 2026-03-29 00:44:26.228802 | orchestrator |  }, 2026-03-29 00:44:26.228808 | orchestrator |  "lvm_volumes": [ 2026-03-29 00:44:26.228815 | orchestrator |  { 2026-03-29 00:44:26.228822 | orchestrator |  "data": "osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7", 2026-03-29 00:44:26.228829 | orchestrator |  "data_vg": "ceph-ec951f8f-e82d-5973-b083-619786b6a4a7" 2026-03-29 00:44:26.228835 | orchestrator |  }, 
2026-03-29 00:44:26.228842 | orchestrator |  { 2026-03-29 00:44:26.228849 | orchestrator |  "data": "osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8", 2026-03-29 00:44:26.228855 | orchestrator |  "data_vg": "ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8" 2026-03-29 00:44:26.228867 | orchestrator |  } 2026-03-29 00:44:26.228874 | orchestrator |  ] 2026-03-29 00:44:26.228880 | orchestrator |  } 2026-03-29 00:44:26.228887 | orchestrator | } 2026-03-29 00:44:26.228899 | orchestrator | 2026-03-29 00:44:26.228906 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-29 00:44:26.228912 | orchestrator | Sunday 29 March 2026 00:44:24 +0000 (0:00:00.325) 0:00:11.468 ********** 2026-03-29 00:44:26.228919 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 00:44:26.228925 | orchestrator | 2026-03-29 00:44:26.228932 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-29 00:44:26.228939 | orchestrator | 2026-03-29 00:44:26.228945 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-29 00:44:26.228952 | orchestrator | Sunday 29 March 2026 00:44:25 +0000 (0:00:01.430) 0:00:12.898 ********** 2026-03-29 00:44:26.228958 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-29 00:44:26.228965 | orchestrator | 2026-03-29 00:44:26.228971 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-29 00:44:26.228978 | orchestrator | Sunday 29 March 2026 00:44:26 +0000 (0:00:00.223) 0:00:13.122 ********** 2026-03-29 00:44:26.228984 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:44:26.228991 | orchestrator | 2026-03-29 00:44:26.229004 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.632415 | orchestrator | Sunday 29 March 2026 00:44:26 +0000 (0:00:00.203) 
0:00:13.326 ********** 2026-03-29 00:44:32.632526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-29 00:44:32.632538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-29 00:44:32.632546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-29 00:44:32.632605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-29 00:44:32.632613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-29 00:44:32.632620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-29 00:44:32.632628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-29 00:44:32.632635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-29 00:44:32.632642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-29 00:44:32.632648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-29 00:44:32.632655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-29 00:44:32.632662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-29 00:44:32.632673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-29 00:44:32.632680 | orchestrator | 2026-03-29 00:44:32.632689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.632695 | orchestrator | Sunday 29 March 2026 00:44:26 +0000 (0:00:00.349) 0:00:13.675 ********** 2026-03-29 00:44:32.632702 | orchestrator | skipping: 
[testbed-node-4] 2026-03-29 00:44:32.632710 | orchestrator | 2026-03-29 00:44:32.632716 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.632722 | orchestrator | Sunday 29 March 2026 00:44:26 +0000 (0:00:00.200) 0:00:13.876 ********** 2026-03-29 00:44:32.632729 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.632736 | orchestrator | 2026-03-29 00:44:32.632742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.632748 | orchestrator | Sunday 29 March 2026 00:44:26 +0000 (0:00:00.167) 0:00:14.043 ********** 2026-03-29 00:44:32.632755 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.632761 | orchestrator | 2026-03-29 00:44:32.632767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.632774 | orchestrator | Sunday 29 March 2026 00:44:27 +0000 (0:00:00.171) 0:00:14.215 ********** 2026-03-29 00:44:32.632806 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.632813 | orchestrator | 2026-03-29 00:44:32.632820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.632826 | orchestrator | Sunday 29 March 2026 00:44:27 +0000 (0:00:00.155) 0:00:14.371 ********** 2026-03-29 00:44:32.632833 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.632839 | orchestrator | 2026-03-29 00:44:32.632845 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.632851 | orchestrator | Sunday 29 March 2026 00:44:27 +0000 (0:00:00.425) 0:00:14.796 ********** 2026-03-29 00:44:32.632857 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.632864 | orchestrator | 2026-03-29 00:44:32.632889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.632896 | 
orchestrator | Sunday 29 March 2026 00:44:27 +0000 (0:00:00.171) 0:00:14.968 ********** 2026-03-29 00:44:32.632903 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.632909 | orchestrator | 2026-03-29 00:44:32.632916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.632922 | orchestrator | Sunday 29 March 2026 00:44:28 +0000 (0:00:00.172) 0:00:15.140 ********** 2026-03-29 00:44:32.632929 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.632936 | orchestrator | 2026-03-29 00:44:32.632943 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.632950 | orchestrator | Sunday 29 March 2026 00:44:28 +0000 (0:00:00.176) 0:00:15.317 ********** 2026-03-29 00:44:32.632957 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28) 2026-03-29 00:44:32.632966 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28) 2026-03-29 00:44:32.632972 | orchestrator | 2026-03-29 00:44:32.632979 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.632985 | orchestrator | Sunday 29 March 2026 00:44:28 +0000 (0:00:00.311) 0:00:15.628 ********** 2026-03-29 00:44:32.632991 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c) 2026-03-29 00:44:32.632998 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c) 2026-03-29 00:44:32.633005 | orchestrator | 2026-03-29 00:44:32.633011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.633018 | orchestrator | Sunday 29 March 2026 00:44:28 +0000 (0:00:00.380) 0:00:16.009 ********** 2026-03-29 00:44:32.633024 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d) 2026-03-29 00:44:32.633031 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d) 2026-03-29 00:44:32.633038 | orchestrator | 2026-03-29 00:44:32.633045 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.633070 | orchestrator | Sunday 29 March 2026 00:44:29 +0000 (0:00:00.319) 0:00:16.328 ********** 2026-03-29 00:44:32.633077 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53) 2026-03-29 00:44:32.633084 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53) 2026-03-29 00:44:32.633090 | orchestrator | 2026-03-29 00:44:32.633097 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:44:32.633105 | orchestrator | Sunday 29 March 2026 00:44:29 +0000 (0:00:00.368) 0:00:16.697 ********** 2026-03-29 00:44:32.633111 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-29 00:44:32.633118 | orchestrator | 2026-03-29 00:44:32.633125 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:32.633131 | orchestrator | Sunday 29 March 2026 00:44:29 +0000 (0:00:00.294) 0:00:16.991 ********** 2026-03-29 00:44:32.633138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-29 00:44:32.633151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-29 00:44:32.633157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-29 00:44:32.633164 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-29 00:44:32.633170 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-29 00:44:32.633177 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-29 00:44:32.633183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-29 00:44:32.633190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-29 00:44:32.633196 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-29 00:44:32.633203 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-29 00:44:32.633210 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-29 00:44:32.633216 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-29 00:44:32.633223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-29 00:44:32.633229 | orchestrator | 2026-03-29 00:44:32.633236 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:32.633242 | orchestrator | Sunday 29 March 2026 00:44:30 +0000 (0:00:00.276) 0:00:17.268 ********** 2026-03-29 00:44:32.633249 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.633255 | orchestrator | 2026-03-29 00:44:32.633262 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:32.633274 | orchestrator | Sunday 29 March 2026 00:44:30 +0000 (0:00:00.426) 0:00:17.694 ********** 2026-03-29 00:44:32.633280 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.633287 | orchestrator | 2026-03-29 00:44:32.633294 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-03-29 00:44:32.633300 | orchestrator | Sunday 29 March 2026 00:44:30 +0000 (0:00:00.142) 0:00:17.836 ********** 2026-03-29 00:44:32.633307 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.633313 | orchestrator | 2026-03-29 00:44:32.633320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:32.633326 | orchestrator | Sunday 29 March 2026 00:44:30 +0000 (0:00:00.139) 0:00:17.976 ********** 2026-03-29 00:44:32.633332 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.633339 | orchestrator | 2026-03-29 00:44:32.633345 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:32.633351 | orchestrator | Sunday 29 March 2026 00:44:31 +0000 (0:00:00.154) 0:00:18.131 ********** 2026-03-29 00:44:32.633358 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.633364 | orchestrator | 2026-03-29 00:44:32.633370 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:32.633376 | orchestrator | Sunday 29 March 2026 00:44:31 +0000 (0:00:00.135) 0:00:18.267 ********** 2026-03-29 00:44:32.633383 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.633389 | orchestrator | 2026-03-29 00:44:32.633395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:32.633401 | orchestrator | Sunday 29 March 2026 00:44:31 +0000 (0:00:00.179) 0:00:18.446 ********** 2026-03-29 00:44:32.633408 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:32.633414 | orchestrator | 2026-03-29 00:44:32.633420 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:32.633427 | orchestrator | Sunday 29 March 2026 00:44:31 +0000 (0:00:00.199) 0:00:18.645 ********** 2026-03-29 00:44:32.633433 | orchestrator | skipping: [testbed-node-4] 
2026-03-29 00:44:32.633444 | orchestrator | 2026-03-29 00:44:32.633450 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:32.633456 | orchestrator | Sunday 29 March 2026 00:44:31 +0000 (0:00:00.180) 0:00:18.826 ********** 2026-03-29 00:44:32.633463 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-29 00:44:32.633470 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-29 00:44:32.633476 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-29 00:44:32.633483 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-29 00:44:32.633489 | orchestrator | 2026-03-29 00:44:32.633495 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:32.633501 | orchestrator | Sunday 29 March 2026 00:44:32 +0000 (0:00:00.723) 0:00:19.549 ********** 2026-03-29 00:44:32.633508 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.858981 | orchestrator | 2026-03-29 00:44:38.859158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:38.859189 | orchestrator | Sunday 29 March 2026 00:44:32 +0000 (0:00:00.181) 0:00:19.730 ********** 2026-03-29 00:44:38.859343 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.859363 | orchestrator | 2026-03-29 00:44:38.859380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:38.859397 | orchestrator | Sunday 29 March 2026 00:44:32 +0000 (0:00:00.181) 0:00:19.911 ********** 2026-03-29 00:44:38.859414 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.859431 | orchestrator | 2026-03-29 00:44:38.859444 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:44:38.859455 | orchestrator | Sunday 29 March 2026 00:44:33 +0000 (0:00:00.230) 0:00:20.142 ********** 2026-03-29 00:44:38.859466 | 
orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.859476 | orchestrator | 2026-03-29 00:44:38.859600 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-29 00:44:38.859621 | orchestrator | Sunday 29 March 2026 00:44:33 +0000 (0:00:00.473) 0:00:20.615 ********** 2026-03-29 00:44:38.859637 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-29 00:44:38.859654 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-29 00:44:38.859672 | orchestrator | 2026-03-29 00:44:38.859687 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-29 00:44:38.859704 | orchestrator | Sunday 29 March 2026 00:44:33 +0000 (0:00:00.178) 0:00:20.794 ********** 2026-03-29 00:44:38.859721 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.859737 | orchestrator | 2026-03-29 00:44:38.859754 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-29 00:44:38.859771 | orchestrator | Sunday 29 March 2026 00:44:33 +0000 (0:00:00.117) 0:00:20.911 ********** 2026-03-29 00:44:38.859836 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.859854 | orchestrator | 2026-03-29 00:44:38.859869 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-29 00:44:38.859885 | orchestrator | Sunday 29 March 2026 00:44:33 +0000 (0:00:00.127) 0:00:21.038 ********** 2026-03-29 00:44:38.859902 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.859918 | orchestrator | 2026-03-29 00:44:38.859934 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-29 00:44:38.860103 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.113) 0:00:21.152 ********** 2026-03-29 00:44:38.860127 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:44:38.860145 | 
orchestrator | 2026-03-29 00:44:38.860161 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-29 00:44:38.860176 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.127) 0:00:21.280 ********** 2026-03-29 00:44:38.860192 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '00df2b4e-a360-5652-a277-e346f3e9f535'}}) 2026-03-29 00:44:38.860211 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35a0cf9a-662c-5baf-94a5-8e3a66aae069'}}) 2026-03-29 00:44:38.860260 | orchestrator | 2026-03-29 00:44:38.860277 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-29 00:44:38.860294 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.194) 0:00:21.474 ********** 2026-03-29 00:44:38.860311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '00df2b4e-a360-5652-a277-e346f3e9f535'}})  2026-03-29 00:44:38.860350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35a0cf9a-662c-5baf-94a5-8e3a66aae069'}})  2026-03-29 00:44:38.860369 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.860385 | orchestrator | 2026-03-29 00:44:38.860401 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-29 00:44:38.860416 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.169) 0:00:21.644 ********** 2026-03-29 00:44:38.860432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '00df2b4e-a360-5652-a277-e346f3e9f535'}})  2026-03-29 00:44:38.860448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35a0cf9a-662c-5baf-94a5-8e3a66aae069'}})  2026-03-29 00:44:38.860465 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.860481 | orchestrator | 2026-03-29 
00:44:38.860497 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-29 00:44:38.860514 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.162) 0:00:21.806 ********** 2026-03-29 00:44:38.860529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '00df2b4e-a360-5652-a277-e346f3e9f535'}})  2026-03-29 00:44:38.860547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35a0cf9a-662c-5baf-94a5-8e3a66aae069'}})  2026-03-29 00:44:38.860589 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.860606 | orchestrator | 2026-03-29 00:44:38.860621 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-29 00:44:38.860637 | orchestrator | Sunday 29 March 2026 00:44:34 +0000 (0:00:00.158) 0:00:21.964 ********** 2026-03-29 00:44:38.860653 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:44:38.860666 | orchestrator | 2026-03-29 00:44:38.860678 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-29 00:44:38.860694 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.143) 0:00:22.108 ********** 2026-03-29 00:44:38.860710 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:44:38.860726 | orchestrator | 2026-03-29 00:44:38.860743 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-29 00:44:38.860759 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.126) 0:00:22.235 ********** 2026-03-29 00:44:38.860854 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.860875 | orchestrator | 2026-03-29 00:44:38.860891 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-29 00:44:38.860907 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.342) 0:00:22.578 ********** 2026-03-29 
00:44:38.860924 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.860939 | orchestrator | 2026-03-29 00:44:38.860956 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-29 00:44:38.860971 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.170) 0:00:22.749 ********** 2026-03-29 00:44:38.861026 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.861088 | orchestrator | 2026-03-29 00:44:38.861105 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-29 00:44:38.861118 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.126) 0:00:22.875 ********** 2026-03-29 00:44:38.861128 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 00:44:38.861138 | orchestrator |  "ceph_osd_devices": { 2026-03-29 00:44:38.861147 | orchestrator |  "sdb": { 2026-03-29 00:44:38.861157 | orchestrator |  "osd_lvm_uuid": "00df2b4e-a360-5652-a277-e346f3e9f535" 2026-03-29 00:44:38.861167 | orchestrator |  }, 2026-03-29 00:44:38.861189 | orchestrator |  "sdc": { 2026-03-29 00:44:38.861199 | orchestrator |  "osd_lvm_uuid": "35a0cf9a-662c-5baf-94a5-8e3a66aae069" 2026-03-29 00:44:38.861209 | orchestrator |  } 2026-03-29 00:44:38.861218 | orchestrator |  } 2026-03-29 00:44:38.861228 | orchestrator | } 2026-03-29 00:44:38.861308 | orchestrator | 2026-03-29 00:44:38.861326 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-29 00:44:38.861343 | orchestrator | Sunday 29 March 2026 00:44:35 +0000 (0:00:00.162) 0:00:23.038 ********** 2026-03-29 00:44:38.861359 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:44:38.861375 | orchestrator | 2026-03-29 00:44:38.861391 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-29 00:44:38.861407 | orchestrator | Sunday 29 March 2026 00:44:36 +0000 (0:00:00.115) 0:00:23.153 ********** 2026-03-29 
00:44:38.861425 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:38.861442 | orchestrator |
2026-03-29 00:44:38.861452 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-29 00:44:38.861462 | orchestrator | Sunday 29 March 2026 00:44:36 +0000 (0:00:00.113) 0:00:23.267 **********
2026-03-29 00:44:38.861625 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:44:38.861645 | orchestrator |
2026-03-29 00:44:38.861654 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-29 00:44:38.861664 | orchestrator | Sunday 29 March 2026 00:44:36 +0000 (0:00:00.141) 0:00:23.408 **********
2026-03-29 00:44:38.861674 | orchestrator | changed: [testbed-node-4] => {
2026-03-29 00:44:38.861683 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-29 00:44:38.861693 | orchestrator |         "ceph_osd_devices": {
2026-03-29 00:44:38.861703 | orchestrator |             "sdb": {
2026-03-29 00:44:38.861712 | orchestrator |                 "osd_lvm_uuid": "00df2b4e-a360-5652-a277-e346f3e9f535"
2026-03-29 00:44:38.861722 | orchestrator |             },
2026-03-29 00:44:38.861731 | orchestrator |             "sdc": {
2026-03-29 00:44:38.861741 | orchestrator |                 "osd_lvm_uuid": "35a0cf9a-662c-5baf-94a5-8e3a66aae069"
2026-03-29 00:44:38.861751 | orchestrator |             }
2026-03-29 00:44:38.861760 | orchestrator |         },
2026-03-29 00:44:38.861770 | orchestrator |         "lvm_volumes": [
2026-03-29 00:44:38.861778 | orchestrator |             {
2026-03-29 00:44:38.861786 | orchestrator |                 "data": "osd-block-00df2b4e-a360-5652-a277-e346f3e9f535",
2026-03-29 00:44:38.861794 | orchestrator |                 "data_vg": "ceph-00df2b4e-a360-5652-a277-e346f3e9f535"
2026-03-29 00:44:38.861802 | orchestrator |             },
2026-03-29 00:44:38.861809 | orchestrator |             {
2026-03-29 00:44:38.861817 | orchestrator |                 "data": "osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069",
2026-03-29 00:44:38.861825 | orchestrator |                 "data_vg": "ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069"
2026-03-29 00:44:38.861833 | orchestrator |             }
2026-03-29 00:44:38.861845 | orchestrator |         ]
2026-03-29 00:44:38.861857 | orchestrator |     }
2026-03-29 00:44:38.861865 | orchestrator | }
2026-03-29 00:44:38.861873 | orchestrator |
2026-03-29 00:44:38.861880 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-29 00:44:38.861888 | orchestrator | Sunday 29 March 2026 00:44:36 +0000 (0:00:00.197) 0:00:23.606 **********
2026-03-29 00:44:38.861896 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-29 00:44:38.861904 | orchestrator |
2026-03-29 00:44:38.861912 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-29 00:44:38.861919 | orchestrator |
2026-03-29 00:44:38.861927 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-29 00:44:38.861935 | orchestrator | Sunday 29 March 2026 00:44:37 +0000 (0:00:01.068) 0:00:24.674 **********
2026-03-29 00:44:38.861943 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-29 00:44:38.861951 | orchestrator |
2026-03-29 00:44:38.861958 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-29 00:44:38.861984 | orchestrator | Sunday 29 March 2026 00:44:38 +0000 (0:00:00.609) 0:00:25.284 **********
2026-03-29 00:44:38.861993 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:44:38.862000 | orchestrator |
2026-03-29 00:44:38.862008 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:38.862065 | orchestrator | Sunday 29 March 2026 00:44:38 +0000 (0:00:00.204) 0:00:25.489 **********
2026-03-29 00:44:38.862076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-29 00:44:38.862084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-29 00:44:38.862092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-29 00:44:38.862099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-29 00:44:38.862107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-29 00:44:38.862127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-29 00:44:46.625163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-29 00:44:46.625248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-29 00:44:46.625260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-29 00:44:46.625271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-29 00:44:46.625280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-29 00:44:46.625289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-29 00:44:46.625297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-29 00:44:46.625307 | orchestrator |
2026-03-29 00:44:46.625317 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625328 | orchestrator | Sunday 29 March 2026 00:44:38 +0000 (0:00:00.461) 0:00:25.951 **********
2026-03-29 00:44:46.625337 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.625348 | orchestrator |
2026-03-29 00:44:46.625357 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625366 | orchestrator | Sunday 29 March 2026 00:44:39 +0000 (0:00:00.267) 0:00:26.218 **********
2026-03-29 00:44:46.625375 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.625384 | orchestrator |
2026-03-29 00:44:46.625392 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625401 | orchestrator | Sunday 29 March 2026 00:44:39 +0000 (0:00:00.229) 0:00:26.448 **********
2026-03-29 00:44:46.625409 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.625418 | orchestrator |
2026-03-29 00:44:46.625427 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625436 | orchestrator | Sunday 29 March 2026 00:44:39 +0000 (0:00:00.242) 0:00:26.690 **********
2026-03-29 00:44:46.625446 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.625455 | orchestrator |
2026-03-29 00:44:46.625464 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625473 | orchestrator | Sunday 29 March 2026 00:44:39 +0000 (0:00:00.200) 0:00:26.891 **********
2026-03-29 00:44:46.625483 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.625493 | orchestrator |
2026-03-29 00:44:46.625503 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625513 | orchestrator | Sunday 29 March 2026 00:44:40 +0000 (0:00:00.214) 0:00:27.106 **********
2026-03-29 00:44:46.625522 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.625532 | orchestrator |
2026-03-29 00:44:46.625541 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625551 | orchestrator | Sunday 29 March 2026 00:44:40 +0000 (0:00:00.214) 0:00:27.320 **********
2026-03-29 00:44:46.625635 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.625642 | orchestrator |
2026-03-29 00:44:46.625648 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625654 | orchestrator | Sunday 29 March 2026 00:44:40 +0000 (0:00:00.209) 0:00:27.530 **********
2026-03-29 00:44:46.625659 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.625665 | orchestrator |
2026-03-29 00:44:46.625670 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625676 | orchestrator | Sunday 29 March 2026 00:44:40 +0000 (0:00:00.198) 0:00:27.729 **********
2026-03-29 00:44:46.625682 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8)
2026-03-29 00:44:46.625689 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8)
2026-03-29 00:44:46.625694 | orchestrator |
2026-03-29 00:44:46.625700 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625705 | orchestrator | Sunday 29 March 2026 00:44:41 +0000 (0:00:00.870) 0:00:28.599 **********
2026-03-29 00:44:46.625711 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41)
2026-03-29 00:44:46.625717 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41)
2026-03-29 00:44:46.625724 | orchestrator |
2026-03-29 00:44:46.625730 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625736 | orchestrator | Sunday 29 March 2026 00:44:41 +0000 (0:00:00.450) 0:00:29.049 **********
2026-03-29 00:44:46.625742 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89)
2026-03-29 00:44:46.625748 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89)
2026-03-29 00:44:46.625754 | orchestrator |
2026-03-29 00:44:46.625761 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625767 | orchestrator | Sunday 29 March 2026 00:44:42 +0000 (0:00:00.448) 0:00:29.498 **********
2026-03-29 00:44:46.625773 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c)
2026-03-29 00:44:46.625779 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c)
2026-03-29 00:44:46.625785 | orchestrator |
2026-03-29 00:44:46.625791 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:44:46.625797 | orchestrator | Sunday 29 March 2026 00:44:42 +0000 (0:00:00.433) 0:00:29.931 **********
2026-03-29 00:44:46.625803 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-29 00:44:46.625809 | orchestrator |
2026-03-29 00:44:46.625815 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.625836 | orchestrator | Sunday 29 March 2026 00:44:43 +0000 (0:00:00.318) 0:00:30.249 **********
2026-03-29 00:44:46.625843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-29 00:44:46.625849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-29 00:44:46.625856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-29 00:44:46.625862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-29 00:44:46.625868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-29 00:44:46.625887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-29 00:44:46.625894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-29 00:44:46.625900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-29 00:44:46.625911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-29 00:44:46.625917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-29 00:44:46.625923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-29 00:44:46.625929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-29 00:44:46.625935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-29 00:44:46.625942 | orchestrator |
2026-03-29 00:44:46.625948 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.625954 | orchestrator | Sunday 29 March 2026 00:44:43 +0000 (0:00:00.356) 0:00:30.606 **********
2026-03-29 00:44:46.625960 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.625966 | orchestrator |
2026-03-29 00:44:46.625972 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.625978 | orchestrator | Sunday 29 March 2026 00:44:43 +0000 (0:00:00.265) 0:00:30.871 **********
2026-03-29 00:44:46.625985 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.625991 | orchestrator |
2026-03-29 00:44:46.625998 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.626007 | orchestrator | Sunday 29 March 2026 00:44:43 +0000 (0:00:00.189) 0:00:31.060 **********
2026-03-29 00:44:46.626103 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.626116 | orchestrator |
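[Editor's note] The repeated `_add-device-links.yml` inclusions above resolve stable `/dev/disk/by-id` aliases back to kernel device names, so a disk can later be addressed by a persistent link rather than by a name like `sdb` that can change between boots. A minimal sketch of that grouping step, with hypothetical helper names and sample data copied from this log (not the playbook's actual code):

```python
# Illustrative only: group /dev/disk/by-id style links by the kernel device
# they resolve to. On a real host the mapping would come from resolving the
# symlinks (e.g. os.path.realpath on entries of /dev/disk/by-id).
from collections import defaultdict


def group_links(links: dict[str, str]) -> dict[str, list[str]]:
    """links maps a by-id name to the kernel device it points at ('sdb')."""
    grouped: dict[str, list[str]] = defaultdict(list)
    for link, target in sorted(links.items()):
        grouped[target].append(link)
    return dict(grouped)


# Sample data mirroring the log: QEMU disks expose both a 0QEMU- and a
# SQEMU-prefixed serial-based alias for the same disk.
links = {
    "scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8": "sdb",
    "scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8": "sdb",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}
print(group_links(links))
```

This mirrors the pattern visible in the task output, where each `ok:` result lists two by-id items per SCSI disk and one per DVD drive.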
2026-03-29 00:44:46.626125 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.626134 | orchestrator | Sunday 29 March 2026 00:44:44 +0000 (0:00:00.161) 0:00:31.222 **********
2026-03-29 00:44:46.626142 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.626151 | orchestrator |
2026-03-29 00:44:46.626161 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.626171 | orchestrator | Sunday 29 March 2026 00:44:44 +0000 (0:00:00.147) 0:00:31.369 **********
2026-03-29 00:44:46.626180 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.626190 | orchestrator |
2026-03-29 00:44:46.626198 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.626207 | orchestrator | Sunday 29 March 2026 00:44:44 +0000 (0:00:00.142) 0:00:31.512 **********
2026-03-29 00:44:46.626216 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.626224 | orchestrator |
2026-03-29 00:44:46.626234 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.626243 | orchestrator | Sunday 29 March 2026 00:44:44 +0000 (0:00:00.421) 0:00:31.934 **********
2026-03-29 00:44:46.626252 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.626260 | orchestrator |
2026-03-29 00:44:46.626269 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.626278 | orchestrator | Sunday 29 March 2026 00:44:45 +0000 (0:00:00.180) 0:00:32.114 **********
2026-03-29 00:44:46.626287 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.626293 | orchestrator |
2026-03-29 00:44:46.626298 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.626304 | orchestrator | Sunday 29 March 2026 00:44:45 +0000 (0:00:00.207) 0:00:32.321 **********
2026-03-29 00:44:46.626309 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-29 00:44:46.626315 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-29 00:44:46.626320 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-29 00:44:46.626326 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-29 00:44:46.626331 | orchestrator |
2026-03-29 00:44:46.626337 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.626342 | orchestrator | Sunday 29 March 2026 00:44:45 +0000 (0:00:00.663) 0:00:32.985 **********
2026-03-29 00:44:46.626347 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.626353 | orchestrator |
2026-03-29 00:44:46.626365 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.626370 | orchestrator | Sunday 29 March 2026 00:44:46 +0000 (0:00:00.200) 0:00:33.186 **********
2026-03-29 00:44:46.626376 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.626381 | orchestrator |
2026-03-29 00:44:46.626387 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.626395 | orchestrator | Sunday 29 March 2026 00:44:46 +0000 (0:00:00.177) 0:00:33.364 **********
2026-03-29 00:44:46.626404 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.626413 | orchestrator |
2026-03-29 00:44:46.626421 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:44:46.626431 | orchestrator | Sunday 29 March 2026 00:44:46 +0000 (0:00:00.179) 0:00:33.543 **********
2026-03-29 00:44:46.626439 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:46.626448 | orchestrator |
2026-03-29 00:44:46.626466 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-29 00:44:50.595184 | orchestrator | Sunday 29 March 2026 00:44:46 +0000 (0:00:00.174) 0:00:33.718 **********
2026-03-29 00:44:50.595317 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-29 00:44:50.595330 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-29 00:44:50.595340 | orchestrator |
2026-03-29 00:44:50.595352 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-29 00:44:50.595362 | orchestrator | Sunday 29 March 2026 00:44:46 +0000 (0:00:00.144) 0:00:33.862 **********
2026-03-29 00:44:50.595372 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.595382 | orchestrator |
2026-03-29 00:44:50.595391 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-29 00:44:50.595401 | orchestrator | Sunday 29 March 2026 00:44:46 +0000 (0:00:00.109) 0:00:33.972 **********
2026-03-29 00:44:50.595410 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.595419 | orchestrator |
2026-03-29 00:44:50.595428 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-29 00:44:50.595438 | orchestrator | Sunday 29 March 2026 00:44:46 +0000 (0:00:00.109) 0:00:34.081 **********
2026-03-29 00:44:50.595447 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.595456 | orchestrator |
2026-03-29 00:44:50.595465 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-29 00:44:50.595475 | orchestrator | Sunday 29 March 2026 00:44:47 +0000 (0:00:00.286) 0:00:34.367 **********
2026-03-29 00:44:50.595484 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:44:50.595495 | orchestrator |
2026-03-29 00:44:50.595504 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-29 00:44:50.595515 | orchestrator | Sunday 29 March 2026 00:44:47 +0000 (0:00:00.165) 0:00:34.533 **********
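[Editor's note] The `osd_lvm_uuid` values that appear in the following tasks (for example `687a2d88-e62e-55f7-...`) have `5` as the version nibble, i.e. they are name-based (SHA-1) UUIDs, which is what makes them stable across reruns of the play. The actual namespace and name inputs used by the playbook are not visible in this log; the sketch below only illustrates the mechanism, with an assumed namespace and name format:

```python
import uuid

# Assumed namespace and name scheme, purely for illustration; the real
# play's inputs are not shown in this log.
NAMESPACE = uuid.NAMESPACE_DNS


def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Deterministic RFC 4122 version-5 UUID for a host/device pair."""
    return str(uuid.uuid5(NAMESPACE, f"{hostname}-{device}"))


u = osd_lvm_uuid("testbed-node-5", "sdb")
print(u)
assert u[14] == "5"  # version nibble of a name-based SHA-1 UUID
```

Because the UUID is a pure function of its inputs, rerunning the task regenerates the same VG/LV names instead of inventing new ones.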
2026-03-29 00:44:50.595525 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '687a2d88-e62e-55f7-9995-e7b8ae522292'}})
2026-03-29 00:44:50.595535 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b95a2846-f14f-5a7d-ae9e-15318cf5fdef'}})
2026-03-29 00:44:50.595545 | orchestrator |
2026-03-29 00:44:50.595554 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-29 00:44:50.595564 | orchestrator | Sunday 29 March 2026 00:44:47 +0000 (0:00:00.152) 0:00:34.685 **********
2026-03-29 00:44:50.595574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '687a2d88-e62e-55f7-9995-e7b8ae522292'}})
2026-03-29 00:44:50.595607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b95a2846-f14f-5a7d-ae9e-15318cf5fdef'}})
2026-03-29 00:44:50.595617 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.595626 | orchestrator |
2026-03-29 00:44:50.595636 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-29 00:44:50.595646 | orchestrator | Sunday 29 March 2026 00:44:47 +0000 (0:00:00.145) 0:00:34.830 **********
2026-03-29 00:44:50.595655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '687a2d88-e62e-55f7-9995-e7b8ae522292'}})
2026-03-29 00:44:50.595697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b95a2846-f14f-5a7d-ae9e-15318cf5fdef'}})
2026-03-29 00:44:50.595709 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.595719 | orchestrator |
2026-03-29 00:44:50.595728 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-29 00:44:50.595738 | orchestrator | Sunday 29 March 2026 00:44:47 +0000 (0:00:00.159) 0:00:34.990 **********
2026-03-29 00:44:50.595771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '687a2d88-e62e-55f7-9995-e7b8ae522292'}})
2026-03-29 00:44:50.595781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b95a2846-f14f-5a7d-ae9e-15318cf5fdef'}})
2026-03-29 00:44:50.595791 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.595800 | orchestrator |
2026-03-29 00:44:50.595810 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-29 00:44:50.595820 | orchestrator | Sunday 29 March 2026 00:44:48 +0000 (0:00:00.151) 0:00:35.141 **********
2026-03-29 00:44:50.595829 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:44:50.595838 | orchestrator |
2026-03-29 00:44:50.595847 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-29 00:44:50.595856 | orchestrator | Sunday 29 March 2026 00:44:48 +0000 (0:00:00.135) 0:00:35.276 **********
2026-03-29 00:44:50.595866 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:44:50.595875 | orchestrator |
2026-03-29 00:44:50.595885 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-29 00:44:50.595894 | orchestrator | Sunday 29 March 2026 00:44:48 +0000 (0:00:00.129) 0:00:35.405 **********
2026-03-29 00:44:50.595904 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.595913 | orchestrator |
2026-03-29 00:44:50.595923 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-29 00:44:50.595932 | orchestrator | Sunday 29 March 2026 00:44:48 +0000 (0:00:00.132) 0:00:35.537 **********
2026-03-29 00:44:50.595941 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.595951 | orchestrator |
2026-03-29 00:44:50.595960 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-29 00:44:50.595969 | orchestrator | Sunday 29 March 2026 00:44:48 +0000 (0:00:00.120) 0:00:35.658 **********
2026-03-29 00:44:50.595979 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.595988 | orchestrator |
2026-03-29 00:44:50.595997 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-29 00:44:50.596007 | orchestrator | Sunday 29 March 2026 00:44:48 +0000 (0:00:00.141) 0:00:35.799 **********
2026-03-29 00:44:50.596016 | orchestrator | ok: [testbed-node-5] => {
2026-03-29 00:44:50.596025 | orchestrator |     "ceph_osd_devices": {
2026-03-29 00:44:50.596034 | orchestrator |         "sdb": {
2026-03-29 00:44:50.596065 | orchestrator |             "osd_lvm_uuid": "687a2d88-e62e-55f7-9995-e7b8ae522292"
2026-03-29 00:44:50.596076 | orchestrator |         },
2026-03-29 00:44:50.596086 | orchestrator |         "sdc": {
2026-03-29 00:44:50.596096 | orchestrator |             "osd_lvm_uuid": "b95a2846-f14f-5a7d-ae9e-15318cf5fdef"
2026-03-29 00:44:50.596106 | orchestrator |         }
2026-03-29 00:44:50.596116 | orchestrator |     }
2026-03-29 00:44:50.596125 | orchestrator | }
2026-03-29 00:44:50.596135 | orchestrator |
2026-03-29 00:44:50.596144 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-29 00:44:50.596153 | orchestrator | Sunday 29 March 2026 00:44:48 +0000 (0:00:00.130) 0:00:35.929 **********
2026-03-29 00:44:50.596163 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.596172 | orchestrator |
2026-03-29 00:44:50.596181 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-29 00:44:50.596191 | orchestrator | Sunday 29 March 2026 00:44:49 +0000 (0:00:00.264) 0:00:36.194 **********
2026-03-29 00:44:50.596200 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.596218 | orchestrator |
2026-03-29 00:44:50.596228 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-29 00:44:50.596237 | orchestrator | Sunday 29 March 2026 00:44:49 +0000 (0:00:00.109) 0:00:36.303 **********
2026-03-29 00:44:50.596246 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:44:50.596255 | orchestrator |
2026-03-29 00:44:50.596265 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-29 00:44:50.596274 | orchestrator | Sunday 29 March 2026 00:44:49 +0000 (0:00:00.130) 0:00:36.434 **********
2026-03-29 00:44:50.596283 | orchestrator | changed: [testbed-node-5] => {
2026-03-29 00:44:50.596294 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-29 00:44:50.596304 | orchestrator |         "ceph_osd_devices": {
2026-03-29 00:44:50.596314 | orchestrator |             "sdb": {
2026-03-29 00:44:50.596324 | orchestrator |                 "osd_lvm_uuid": "687a2d88-e62e-55f7-9995-e7b8ae522292"
2026-03-29 00:44:50.596334 | orchestrator |             },
2026-03-29 00:44:50.596345 | orchestrator |             "sdc": {
2026-03-29 00:44:50.596355 | orchestrator |                 "osd_lvm_uuid": "b95a2846-f14f-5a7d-ae9e-15318cf5fdef"
2026-03-29 00:44:50.596365 | orchestrator |             }
2026-03-29 00:44:50.596375 | orchestrator |         },
2026-03-29 00:44:50.596385 | orchestrator |         "lvm_volumes": [
2026-03-29 00:44:50.596395 | orchestrator |             {
2026-03-29 00:44:50.596404 | orchestrator |                 "data": "osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292",
2026-03-29 00:44:50.596413 | orchestrator |                 "data_vg": "ceph-687a2d88-e62e-55f7-9995-e7b8ae522292"
2026-03-29 00:44:50.596423 | orchestrator |             },
2026-03-29 00:44:50.596433 | orchestrator |             {
2026-03-29 00:44:50.596442 | orchestrator |                 "data": "osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef",
2026-03-29 00:44:50.596452 | orchestrator |                 "data_vg": "ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef"
2026-03-29 00:44:50.596461 | orchestrator |             }
2026-03-29 00:44:50.596471 | orchestrator |         ]
2026-03-29 00:44:50.596486 | orchestrator |     }
2026-03-29 00:44:50.596496 | orchestrator | }
2026-03-29 00:44:50.596505 | orchestrator |
2026-03-29 00:44:50.596515 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-29 00:44:50.596524 | orchestrator | Sunday 29 March 2026 00:44:49 +0000 (0:00:00.201) 0:00:36.636 **********
2026-03-29 00:44:50.596533 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-29 00:44:50.596542 | orchestrator |
2026-03-29 00:44:50.596551 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:44:50.596561 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-29 00:44:50.596571 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-29 00:44:50.596630 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-29 00:44:50.596642 | orchestrator |
2026-03-29 00:44:50.596652 | orchestrator |
2026-03-29 00:44:50.596661 | orchestrator |
2026-03-29 00:44:50.596670 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:44:50.596680 | orchestrator | Sunday 29 March 2026 00:44:50 +0000 (0:00:01.038) 0:00:37.674 **********
2026-03-29 00:44:50.596689 | orchestrator | ===============================================================================
2026-03-29 00:44:50.596698 | orchestrator | Write configuration file ------------------------------------------------ 3.54s
2026-03-29 00:44:50.596707 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s
2026-03-29 00:44:50.596715 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.06s
2026-03-29 00:44:50.596721 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s
2026-03-29 00:44:50.596736 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s
2026-03-29 00:44:50.596741 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s
2026-03-29 00:44:50.596747 | orchestrator | Print configuration data ------------------------------------------------ 0.72s
2026-03-29 00:44:50.596753 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-03-29 00:44:50.596759 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.68s
2026-03-29 00:44:50.596764 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2026-03-29 00:44:50.596770 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2026-03-29 00:44:50.596776 | orchestrator | Get initial list of available block devices ----------------------------- 0.63s
2026-03-29 00:44:50.596782 | orchestrator | Set DB devices config data ---------------------------------------------- 0.61s
2026-03-29 00:44:50.596795 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.53s
2026-03-29 00:44:50.991027 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s
2026-03-29 00:44:50.991123 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s
2026-03-29 00:44:50.991129 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.51s
2026-03-29 00:44:50.991135 | orchestrator | Print WAL devices ------------------------------------------------------- 0.49s
2026-03-29 00:44:50.991140 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.47s
2026-03-29 00:44:50.991145 | orchestrator | Add known partitions to the list of available block devices ------------- 0.47s
2026-03-29 00:45:14.002218 | orchestrator | 2026-03-29 00:45:13 | INFO  | Task 3a5cd451-8ae8-4ef7-a4a0-bdcfec1d66b9 (sync inventory) is running in background. Output coming soon.
2026-03-29 00:45:41.271154 | orchestrator | 2026-03-29 00:45:15 | INFO  | Starting group_vars file reorganization
2026-03-29 00:45:41.271236 | orchestrator | 2026-03-29 00:45:15 | INFO  | Moved 0 file(s) to their respective directories
2026-03-29 00:45:41.271258 | orchestrator | 2026-03-29 00:45:15 | INFO  | Group_vars file reorganization completed
2026-03-29 00:45:41.271267 | orchestrator | 2026-03-29 00:45:18 | INFO  | Starting variable preparation from inventory
2026-03-29 00:45:41.271275 | orchestrator | 2026-03-29 00:45:21 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-29 00:45:41.271283 | orchestrator | 2026-03-29 00:45:21 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-29 00:45:41.271310 | orchestrator | 2026-03-29 00:45:21 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-29 00:45:41.271319 | orchestrator | 2026-03-29 00:45:21 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-29 00:45:41.271327 | orchestrator | 2026-03-29 00:45:21 | INFO  | Variable preparation completed
2026-03-29 00:45:41.271335 | orchestrator | 2026-03-29 00:45:23 | INFO  | Starting inventory overwrite handling
2026-03-29 00:45:41.271343 | orchestrator | 2026-03-29 00:45:23 | INFO  | Handling group overwrites in 99-overwrite
2026-03-29 00:45:41.271354 | orchestrator | 2026-03-29 00:45:23 | INFO  | Removing group frr:children from 60-generic
2026-03-29 00:45:41.271362 | orchestrator | 2026-03-29 00:45:23 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-29 00:45:41.271369 | orchestrator | 2026-03-29 00:45:23 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-29 00:45:41.271377 | orchestrator | 2026-03-29 00:45:23 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-29 00:45:41.271385 | orchestrator | 2026-03-29 00:45:23 | INFO  | Handling group overwrites in 20-roles
2026-03-29 00:45:41.271392 | orchestrator | 2026-03-29 00:45:23 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-29 00:45:41.271473 | orchestrator | 2026-03-29 00:45:23 | INFO  | Removed 5 group(s) in total
2026-03-29 00:45:41.271481 | orchestrator | 2026-03-29 00:45:23 | INFO  | Inventory overwrite handling completed
2026-03-29 00:45:41.271488 | orchestrator | 2026-03-29 00:45:24 | INFO  | Starting merge of inventory files
2026-03-29 00:45:41.271495 | orchestrator | 2026-03-29 00:45:24 | INFO  | Inventory files merged successfully
2026-03-29 00:45:41.271502 | orchestrator | 2026-03-29 00:45:29 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-29 00:45:41.271510 | orchestrator | 2026-03-29 00:45:40 | INFO  | Successfully wrote ClusterShell configuration
2026-03-29 00:45:41.271517 | orchestrator | [master e8ab41a] 2026-03-29-00-45
2026-03-29 00:45:41.271525 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-29 00:45:43.265375 | orchestrator | 2026-03-29 00:45:43 | INFO  | Task 36960f4b-25a7-4c28-beb3-fac923ca5568 (ceph-create-lvm-devices) was prepared for execution.
2026-03-29 00:45:43.265441 | orchestrator | 2026-03-29 00:45:43 | INFO  | It takes a moment until task 36960f4b-25a7-4c28-beb3-fac923ca5568 (ceph-create-lvm-devices) has been started and output is visible here.
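[Editor's note] Before the `ceph-create-lvm-devices` play starts below, note the shape of the data the preceding `Ceph configure LVM` play wrote: each entry in `ceph_osd_devices` carries an `osd_lvm_uuid`, and the block-only `lvm_volumes` list is derived from it one-to-one, as `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal sketch of that derivation, with a hypothetical function name and values copied from the log:

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list[dict]:
    """Mirror of the structure shown by 'Print configuration data': each
    device's osd_lvm_uuid yields an LV 'osd-block-<uuid>' in a VG
    'ceph-<uuid>' (block-only layout, no separate DB/WAL volumes)."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for _, spec in sorted(ceph_osd_devices.items())
    ]


# Values taken verbatim from the testbed-node-5 output above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "687a2d88-e62e-55f7-9995-e7b8ae522292"},
    "sdc": {"osd_lvm_uuid": "b95a2846-f14f-5a7d-ae9e-15318cf5fdef"},
}
print(build_lvm_volumes(ceph_osd_devices))
```

The resulting list matches the `lvm_volumes` block printed for testbed-node-5, which the next play consumes when it actually creates the VGs and LVs.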
2026-03-29 00:45:53.933489 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-29 00:45:53.933621 | orchestrator | 2.16.14
2026-03-29 00:45:53.933650 | orchestrator |
2026-03-29 00:45:53.933672 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-29 00:45:53.933728 | orchestrator |
2026-03-29 00:45:53.933751 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-29 00:45:53.933770 | orchestrator | Sunday 29 March 2026 00:45:47 +0000 (0:00:00.305) 0:00:00.305 **********
2026-03-29 00:45:53.933790 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-29 00:45:53.933810 | orchestrator |
2026-03-29 00:45:53.933830 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-29 00:45:53.933848 | orchestrator | Sunday 29 March 2026 00:45:47 +0000 (0:00:00.238) 0:00:00.544 **********
2026-03-29 00:45:53.933866 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:45:53.933884 | orchestrator |
2026-03-29 00:45:53.933903 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.933924 | orchestrator | Sunday 29 March 2026 00:45:47 +0000 (0:00:00.211) 0:00:00.756 **********
2026-03-29 00:45:53.933943 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-29 00:45:53.933985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-29 00:45:53.934007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-29 00:45:53.934111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-29 00:45:53.934160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-29 00:45:53.934180 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-29 00:45:53.934199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-29 00:45:53.934217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-29 00:45:53.934237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-29 00:45:53.934255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-29 00:45:53.934274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-29 00:45:53.934292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-29 00:45:53.934315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-29 00:45:53.934370 | orchestrator |
2026-03-29 00:45:53.934389 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.934426 | orchestrator | Sunday 29 March 2026 00:45:48 +0000 (0:00:00.483) 0:00:01.239 **********
2026-03-29 00:45:53.934445 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.934463 | orchestrator |
2026-03-29 00:45:53.934482 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.934500 | orchestrator | Sunday 29 March 2026 00:45:48 +0000 (0:00:00.224) 0:00:01.464 **********
2026-03-29 00:45:53.934518 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.934535 | orchestrator |
2026-03-29 00:45:53.934554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.934573 | orchestrator | Sunday 29 March 2026 00:45:48 +0000 (0:00:00.208) 0:00:01.672 **********
2026-03-29 00:45:53.934592 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.934610 | orchestrator |
2026-03-29 00:45:53.934628 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.934648 | orchestrator | Sunday 29 March 2026 00:45:48 +0000 (0:00:00.174) 0:00:01.847 **********
2026-03-29 00:45:53.934659 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.934670 | orchestrator |
2026-03-29 00:45:53.934681 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.934716 | orchestrator | Sunday 29 March 2026 00:45:48 +0000 (0:00:00.197) 0:00:02.044 **********
2026-03-29 00:45:53.934728 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.934739 | orchestrator |
2026-03-29 00:45:53.934749 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.934760 | orchestrator | Sunday 29 March 2026 00:45:49 +0000 (0:00:00.174) 0:00:02.219 **********
2026-03-29 00:45:53.934771 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.934782 | orchestrator |
2026-03-29 00:45:53.934792 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.934803 | orchestrator | Sunday 29 March 2026 00:45:49 +0000 (0:00:00.182) 0:00:02.402 **********
2026-03-29 00:45:53.934814 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.934825 | orchestrator |
2026-03-29 00:45:53.934836 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.934847 | orchestrator | Sunday 29 March 2026 00:45:49 +0000 (0:00:00.183) 0:00:02.585 **********
2026-03-29 00:45:53.934863 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.934879 | orchestrator |
2026-03-29 00:45:53.934891 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.934902 | orchestrator | Sunday 29 March 2026 00:45:49 +0000 (0:00:00.193) 0:00:02.779 **********
2026-03-29 00:45:53.934913 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a)
2026-03-29 00:45:53.934925 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a)
2026-03-29 00:45:53.934936 | orchestrator |
2026-03-29 00:45:53.934947 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.934982 | orchestrator | Sunday 29 March 2026 00:45:50 +0000 (0:00:00.419) 0:00:03.198 **********
2026-03-29 00:45:53.934994 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551)
2026-03-29 00:45:53.935004 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551)
2026-03-29 00:45:53.935015 | orchestrator |
2026-03-29 00:45:53.935031 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.935049 | orchestrator | Sunday 29 March 2026 00:45:50 +0000 (0:00:00.566) 0:00:03.765 **********
2026-03-29 00:45:53.935074 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf)
2026-03-29 00:45:53.935098 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf)
2026-03-29 00:45:53.935132 | orchestrator |
2026-03-29 00:45:53.935149 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.935166 | orchestrator | Sunday 29 March 2026 00:45:51 +0000 (0:00:00.543) 0:00:04.309 **********
2026-03-29 00:45:53.935183 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c)
2026-03-29 00:45:53.935218 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c)
2026-03-29 00:45:53.935235 | orchestrator |
2026-03-29 00:45:53.935253 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:45:53.935287 | orchestrator | Sunday 29 March 2026 00:45:51 +0000 (0:00:00.697) 0:00:05.006 **********
2026-03-29 00:45:53.935307 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-29 00:45:53.935325 | orchestrator |
2026-03-29 00:45:53.935344 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:45:53.935361 | orchestrator | Sunday 29 March 2026 00:45:52 +0000 (0:00:00.319) 0:00:05.325 **********
2026-03-29 00:45:53.935380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-29 00:45:53.935392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-29 00:45:53.935403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-29 00:45:53.935433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-29 00:45:53.935444 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-29 00:45:53.935455 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-29 00:45:53.935466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-29 00:45:53.935476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-29 00:45:53.935487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-29 00:45:53.935498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-29 00:45:53.935508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-29 00:45:53.935525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-29 00:45:53.935543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-29 00:45:53.935557 | orchestrator |
2026-03-29 00:45:53.935567 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:45:53.935578 | orchestrator | Sunday 29 March 2026 00:45:52 +0000 (0:00:00.373) 0:00:05.699 **********
2026-03-29 00:45:53.935588 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.935599 | orchestrator |
2026-03-29 00:45:53.935610 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:45:53.935620 | orchestrator | Sunday 29 March 2026 00:45:52 +0000 (0:00:00.173) 0:00:05.872 **********
2026-03-29 00:45:53.935631 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.935648 | orchestrator |
2026-03-29 00:45:53.935672 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:45:53.935781 | orchestrator | Sunday 29 March 2026 00:45:52 +0000 (0:00:00.198) 0:00:06.071 **********
2026-03-29 00:45:53.935802 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.935820 | orchestrator |
2026-03-29 00:45:53.935838 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:45:53.935855 | orchestrator | Sunday 29 March 2026 00:45:53 +0000 (0:00:00.211) 0:00:06.282 **********
2026-03-29 00:45:53.935873 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.935907 | orchestrator |
2026-03-29 00:45:53.935925 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:45:53.935943 | orchestrator | Sunday 29 March 2026 00:45:53 +0000 (0:00:00.194) 0:00:06.477 **********
2026-03-29 00:45:53.935960 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.935977 | orchestrator |
2026-03-29 00:45:53.935995 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:45:53.936012 | orchestrator | Sunday 29 March 2026 00:45:53 +0000 (0:00:00.235) 0:00:06.713 **********
2026-03-29 00:45:53.936029 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.936046 | orchestrator |
2026-03-29 00:45:53.936063 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:45:53.936081 | orchestrator | Sunday 29 March 2026 00:45:53 +0000 (0:00:00.188) 0:00:06.901 **********
2026-03-29 00:45:53.936100 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:45:53.936119 | orchestrator |
2026-03-29 00:45:53.936152 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:01.813909 | orchestrator | Sunday 29 March 2026 00:45:53 +0000 (0:00:00.212) 0:00:07.113 **********
2026-03-29 00:46:01.813992 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814002 | orchestrator |
2026-03-29 00:46:01.814010 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:01.814044 | orchestrator | Sunday 29 March 2026 00:45:54 +0000 (0:00:00.201) 0:00:07.314 **********
2026-03-29 00:46:01.814052 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-29 00:46:01.814060 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-29 00:46:01.814066 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-29 00:46:01.814072 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-29 00:46:01.814080 | orchestrator |
2026-03-29 00:46:01.814087 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:01.814094 | orchestrator | Sunday 29 March 2026 00:45:55 +0000 (0:00:01.010) 0:00:08.325 **********
2026-03-29 00:46:01.814100 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814107 | orchestrator |
2026-03-29 00:46:01.814113 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:01.814120 | orchestrator | Sunday 29 March 2026 00:45:55 +0000 (0:00:00.218) 0:00:08.544 **********
2026-03-29 00:46:01.814127 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814133 | orchestrator |
2026-03-29 00:46:01.814140 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:01.814147 | orchestrator | Sunday 29 March 2026 00:45:55 +0000 (0:00:00.222) 0:00:08.766 **********
2026-03-29 00:46:01.814154 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814161 | orchestrator |
2026-03-29 00:46:01.814167 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:01.814174 | orchestrator | Sunday 29 March 2026 00:45:55 +0000 (0:00:00.226) 0:00:08.993 **********
2026-03-29 00:46:01.814181 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814188 | orchestrator |
2026-03-29 00:46:01.814194 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-29 00:46:01.814200 | orchestrator | Sunday 29 March 2026 00:45:56 +0000 (0:00:00.211) 0:00:09.204 **********
2026-03-29 00:46:01.814206 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814212 | orchestrator |
2026-03-29 00:46:01.814218 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-29 00:46:01.814225 | orchestrator | Sunday 29 March 2026 00:45:56 +0000 (0:00:00.130) 0:00:09.335 **********
2026-03-29 00:46:01.814235 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ec951f8f-e82d-5973-b083-619786b6a4a7'}})
2026-03-29 00:46:01.814242 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fb9b884b-e3c0-524d-8e95-f889faf8bdb8'}})
2026-03-29 00:46:01.814249 | orchestrator |
2026-03-29 00:46:01.814256 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-29 00:46:01.814285 | orchestrator | Sunday 29 March 2026 00:45:56 +0000 (0:00:00.177) 0:00:09.513 **********
2026-03-29 00:46:01.814294 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:01.814303 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:01.814309 | orchestrator |
2026-03-29 00:46:01.814316 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-29 00:46:01.814323 | orchestrator | Sunday 29 March 2026 00:45:58 +0000 (0:00:02.012) 0:00:11.525 **********
2026-03-29 00:46:01.814329 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:01.814338 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:01.814344 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814350 | orchestrator |
2026-03-29 00:46:01.814356 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-29 00:46:01.814363 | orchestrator | Sunday 29 March 2026 00:45:58 +0000 (0:00:00.172) 0:00:11.698 **********
2026-03-29 00:46:01.814369 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:01.814376 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:01.814382 | orchestrator |
2026-03-29 00:46:01.814389 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-29 00:46:01.814396 | orchestrator | Sunday 29 March 2026 00:46:00 +0000 (0:00:01.515) 0:00:13.213 **********
2026-03-29 00:46:01.814403 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:01.814410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:01.814416 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814423 | orchestrator |
2026-03-29 00:46:01.814430 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-29 00:46:01.814436 | orchestrator | Sunday 29 March 2026 00:46:00 +0000 (0:00:00.140) 0:00:13.354 **********
2026-03-29 00:46:01.814459 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814469 | orchestrator |
2026-03-29 00:46:01.814478 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-29 00:46:01.814488 | orchestrator | Sunday 29 March 2026 00:46:00 +0000 (0:00:00.126) 0:00:13.480 **********
2026-03-29 00:46:01.814497 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:01.814508 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:01.814516 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814524 | orchestrator |
2026-03-29 00:46:01.814532 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-29 00:46:01.814542 | orchestrator | Sunday 29 March 2026 00:46:00 +0000 (0:00:00.293) 0:00:13.774 **********
2026-03-29 00:46:01.814552 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814563 | orchestrator |
2026-03-29 00:46:01.814571 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-29 00:46:01.814581 | orchestrator | Sunday 29 March 2026 00:46:00 +0000 (0:00:00.122) 0:00:13.897 **********
2026-03-29 00:46:01.814600 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:01.814611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:01.814620 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814629 | orchestrator |
2026-03-29 00:46:01.814639 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-29 00:46:01.814649 | orchestrator | Sunday 29 March 2026 00:46:00 +0000 (0:00:00.132) 0:00:14.030 **********
2026-03-29 00:46:01.814658 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814669 | orchestrator |
2026-03-29 00:46:01.814680 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-29 00:46:01.814689 | orchestrator | Sunday 29 March 2026 00:46:00 +0000 (0:00:00.127) 0:00:14.158 **********
2026-03-29 00:46:01.814698 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:01.814727 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:01.814736 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814744 | orchestrator |
2026-03-29 00:46:01.814754 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-29 00:46:01.814764 | orchestrator | Sunday 29 March 2026 00:46:01 +0000 (0:00:00.134) 0:00:14.285 **********
2026-03-29 00:46:01.814774 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:46:01.814784 | orchestrator |
2026-03-29 00:46:01.814795 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-29 00:46:01.814821 | orchestrator | Sunday 29 March 2026 00:46:01 +0000 (0:00:00.134) 0:00:14.419 **********
2026-03-29 00:46:01.814831 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:01.814837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:01.814844 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814851 | orchestrator |
2026-03-29 00:46:01.814857 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-29 00:46:01.814864 | orchestrator | Sunday 29 March 2026 00:46:01 +0000 (0:00:00.157) 0:00:14.577 **********
2026-03-29 00:46:01.814871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:01.814878 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:01.814884 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814891 | orchestrator |
2026-03-29 00:46:01.814898 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-29 00:46:01.814904 | orchestrator | Sunday 29 March 2026 00:46:01 +0000 (0:00:00.154) 0:00:14.731 **********
2026-03-29 00:46:01.814911 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:01.814918 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:01.814924 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814932 | orchestrator |
2026-03-29 00:46:01.814938 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-29 00:46:01.814944 | orchestrator | Sunday 29 March 2026 00:46:01 +0000 (0:00:00.138) 0:00:14.870 **********
2026-03-29 00:46:01.814956 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:01.814963 | orchestrator |
2026-03-29 00:46:01.814970 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-29 00:46:01.814985 | orchestrator | Sunday 29 March 2026 00:46:01 +0000 (0:00:00.124) 0:00:14.995 **********
2026-03-29 00:46:07.642009 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642156 | orchestrator |
2026-03-29 00:46:07.642170 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-29 00:46:07.642181 | orchestrator | Sunday 29 March 2026 00:46:01 +0000 (0:00:00.131) 0:00:15.126 **********
2026-03-29 00:46:07.642189 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642197 | orchestrator |
2026-03-29 00:46:07.642206 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-29 00:46:07.642214 | orchestrator | Sunday 29 March 2026 00:46:02 +0000 (0:00:00.121) 0:00:15.247 **********
2026-03-29 00:46:07.642223 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 00:46:07.642233 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-29 00:46:07.642241 | orchestrator | }
2026-03-29 00:46:07.642249 | orchestrator |
2026-03-29 00:46:07.642257 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-29 00:46:07.642265 | orchestrator | Sunday 29 March 2026 00:46:02 +0000 (0:00:00.265) 0:00:15.513 **********
2026-03-29 00:46:07.642273 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 00:46:07.642280 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-29 00:46:07.642289 | orchestrator | }
2026-03-29 00:46:07.642296 | orchestrator |
2026-03-29 00:46:07.642305 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-29 00:46:07.642321 | orchestrator | Sunday 29 March 2026 00:46:02 +0000 (0:00:00.134) 0:00:15.647 **********
2026-03-29 00:46:07.642331 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 00:46:07.642340 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-29 00:46:07.642348 | orchestrator | }
2026-03-29 00:46:07.642355 | orchestrator |
2026-03-29 00:46:07.642364 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-29 00:46:07.642372 | orchestrator | Sunday 29 March 2026 00:46:02 +0000 (0:00:00.129) 0:00:15.777 **********
2026-03-29 00:46:07.642380 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:46:07.642388 | orchestrator |
2026-03-29 00:46:07.642396 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-29 00:46:07.642404 | orchestrator | Sunday 29 March 2026 00:46:03 +0000 (0:00:00.722) 0:00:16.500 **********
2026-03-29 00:46:07.642411 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:46:07.642418 | orchestrator |
2026-03-29 00:46:07.642424 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-29 00:46:07.642431 | orchestrator | Sunday 29 March 2026 00:46:03 +0000 (0:00:00.497) 0:00:16.997 **********
2026-03-29 00:46:07.642438 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:46:07.642445 | orchestrator |
2026-03-29 00:46:07.642452 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-29 00:46:07.642461 | orchestrator | Sunday 29 March 2026 00:46:04 +0000 (0:00:00.502) 0:00:17.500 **********
2026-03-29 00:46:07.642469 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:46:07.642476 | orchestrator |
2026-03-29 00:46:07.642485 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-29 00:46:07.642493 | orchestrator | Sunday 29 March 2026 00:46:04 +0000 (0:00:00.129) 0:00:17.629 **********
2026-03-29 00:46:07.642501 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642508 | orchestrator |
2026-03-29 00:46:07.642516 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-29 00:46:07.642524 | orchestrator | Sunday 29 March 2026 00:46:04 +0000 (0:00:00.103) 0:00:17.733 **********
2026-03-29 00:46:07.642533 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642540 | orchestrator |
2026-03-29 00:46:07.642547 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-29 00:46:07.642580 | orchestrator | Sunday 29 March 2026 00:46:04 +0000 (0:00:00.096) 0:00:17.830 **********
2026-03-29 00:46:07.642603 | orchestrator | ok: [testbed-node-3] => {
2026-03-29 00:46:07.642611 | orchestrator |     "vgs_report": {
2026-03-29 00:46:07.642619 | orchestrator |         "vg": []
2026-03-29 00:46:07.642627 | orchestrator |     }
2026-03-29 00:46:07.642635 | orchestrator | }
2026-03-29 00:46:07.642643 | orchestrator |
2026-03-29 00:46:07.642651 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-29 00:46:07.642659 | orchestrator | Sunday 29 March 2026 00:46:04 +0000 (0:00:00.136) 0:00:17.966 **********
2026-03-29 00:46:07.642666 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642673 | orchestrator |
2026-03-29 00:46:07.642680 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-29 00:46:07.642688 | orchestrator | Sunday 29 March 2026 00:46:04 +0000 (0:00:00.130) 0:00:18.096 **********
2026-03-29 00:46:07.642696 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642703 | orchestrator |
2026-03-29 00:46:07.642710 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-29 00:46:07.642785 | orchestrator | Sunday 29 March 2026 00:46:05 +0000 (0:00:00.128) 0:00:18.225 **********
2026-03-29 00:46:07.642796 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642804 | orchestrator |
2026-03-29 00:46:07.642812 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-29 00:46:07.642820 | orchestrator | Sunday 29 March 2026 00:46:05 +0000 (0:00:00.250) 0:00:18.476 **********
2026-03-29 00:46:07.642827 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642835 | orchestrator |
2026-03-29 00:46:07.642843 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-29 00:46:07.642851 | orchestrator | Sunday 29 March 2026 00:46:05 +0000 (0:00:00.126) 0:00:18.602 **********
2026-03-29 00:46:07.642859 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642866 | orchestrator |
2026-03-29 00:46:07.642874 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-29 00:46:07.642882 | orchestrator | Sunday 29 March 2026 00:46:05 +0000 (0:00:00.126) 0:00:18.729 **********
2026-03-29 00:46:07.642891 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642899 | orchestrator |
2026-03-29 00:46:07.642907 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-29 00:46:07.642915 | orchestrator | Sunday 29 March 2026 00:46:05 +0000 (0:00:00.139) 0:00:18.868 **********
2026-03-29 00:46:07.642923 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642931 | orchestrator |
2026-03-29 00:46:07.642940 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-29 00:46:07.642946 | orchestrator | Sunday 29 March 2026 00:46:05 +0000 (0:00:00.163) 0:00:19.032 **********
2026-03-29 00:46:07.642973 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.642980 | orchestrator |
2026-03-29 00:46:07.642987 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-29 00:46:07.642994 | orchestrator | Sunday 29 March 2026 00:46:05 +0000 (0:00:00.119) 0:00:19.151 **********
2026-03-29 00:46:07.643001 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.643007 | orchestrator |
2026-03-29 00:46:07.643014 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-29 00:46:07.643020 | orchestrator | Sunday 29 March 2026 00:46:06 +0000 (0:00:00.114) 0:00:19.265 **********
2026-03-29 00:46:07.643027 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.643033 | orchestrator |
2026-03-29 00:46:07.643040 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-29 00:46:07.643047 | orchestrator | Sunday 29 March 2026 00:46:06 +0000 (0:00:00.122) 0:00:19.388 **********
2026-03-29 00:46:07.643053 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.643059 | orchestrator |
2026-03-29 00:46:07.643066 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-29 00:46:07.643072 | orchestrator | Sunday 29 March 2026 00:46:06 +0000 (0:00:00.115) 0:00:19.503 **********
2026-03-29 00:46:07.643090 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.643097 | orchestrator |
2026-03-29 00:46:07.643104 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-29 00:46:07.643111 | orchestrator | Sunday 29 March 2026 00:46:06 +0000 (0:00:00.123) 0:00:19.626 **********
2026-03-29 00:46:07.643118 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.643125 | orchestrator |
2026-03-29 00:46:07.643132 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-29 00:46:07.643139 | orchestrator | Sunday 29 March 2026 00:46:06 +0000 (0:00:00.118) 0:00:19.744 **********
2026-03-29 00:46:07.643146 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.643152 | orchestrator |
2026-03-29 00:46:07.643159 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-29 00:46:07.643165 | orchestrator | Sunday 29 March 2026 00:46:06 +0000 (0:00:00.131) 0:00:19.876 **********
2026-03-29 00:46:07.643173 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:07.643182 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:07.643189 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.643196 | orchestrator |
2026-03-29 00:46:07.643203 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-29 00:46:07.643210 | orchestrator | Sunday 29 March 2026 00:46:06 +0000 (0:00:00.272) 0:00:20.149 **********
2026-03-29 00:46:07.643217 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:07.643223 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:07.643230 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.643236 | orchestrator |
2026-03-29 00:46:07.643244 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-29 00:46:07.643251 | orchestrator | Sunday 29 March 2026 00:46:07 +0000 (0:00:00.140) 0:00:20.289 **********
2026-03-29 00:46:07.643259 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})
2026-03-29 00:46:07.643265 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})
2026-03-29 00:46:07.643269 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:46:07.643273 | orchestrator |
2026-03-29 00:46:07.643278 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-29 00:46:07.643282 | orchestrator | Sunday 29 March 2026 00:46:07 +0000 (0:00:00.133) 0:00:20.423 **********
2026-03-29 00:46:07.643286 | orchestrator | skipping: [testbed-node-3] => (item={'data':
'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})  2026-03-29 00:46:07.643291 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})  2026-03-29 00:46:07.643295 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:46:07.643299 | orchestrator | 2026-03-29 00:46:07.643304 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-29 00:46:07.643308 | orchestrator | Sunday 29 March 2026 00:46:07 +0000 (0:00:00.141) 0:00:20.565 ********** 2026-03-29 00:46:07.643312 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})  2026-03-29 00:46:07.643317 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})  2026-03-29 00:46:07.643327 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:46:07.643332 | orchestrator | 2026-03-29 00:46:07.643336 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-29 00:46:07.643349 | orchestrator | Sunday 29 March 2026 00:46:07 +0000 (0:00:00.126) 0:00:20.692 ********** 2026-03-29 00:46:07.643362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})  2026-03-29 00:46:12.432352 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})  2026-03-29 00:46:12.432437 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:46:12.432447 | orchestrator | 2026-03-29 00:46:12.432456 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-29 00:46:12.432465 | orchestrator | Sunday 29 March 2026 00:46:07 +0000 (0:00:00.134) 0:00:20.826 ********** 2026-03-29 00:46:12.432472 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})  2026-03-29 00:46:12.432479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})  2026-03-29 00:46:12.432485 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:46:12.432492 | orchestrator | 2026-03-29 00:46:12.432499 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-29 00:46:12.432505 | orchestrator | Sunday 29 March 2026 00:46:07 +0000 (0:00:00.141) 0:00:20.968 ********** 2026-03-29 00:46:12.432512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})  2026-03-29 00:46:12.432518 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})  2026-03-29 00:46:12.432525 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:46:12.432532 | orchestrator | 2026-03-29 00:46:12.432538 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-29 00:46:12.432545 | orchestrator | Sunday 29 March 2026 00:46:07 +0000 (0:00:00.142) 0:00:21.110 ********** 2026-03-29 00:46:12.432551 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:46:12.432558 | orchestrator | 2026-03-29 00:46:12.432565 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-29 00:46:12.432572 | orchestrator | Sunday 29 March 2026 00:46:08 +0000 
(0:00:00.531) 0:00:21.642 ********** 2026-03-29 00:46:12.432578 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:46:12.432584 | orchestrator | 2026-03-29 00:46:12.432591 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-29 00:46:12.432597 | orchestrator | Sunday 29 March 2026 00:46:08 +0000 (0:00:00.504) 0:00:22.146 ********** 2026-03-29 00:46:12.432604 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:46:12.432610 | orchestrator | 2026-03-29 00:46:12.432617 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-29 00:46:12.432633 | orchestrator | Sunday 29 March 2026 00:46:09 +0000 (0:00:00.155) 0:00:22.301 ********** 2026-03-29 00:46:12.432640 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'vg_name': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'}) 2026-03-29 00:46:12.432670 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'vg_name': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'}) 2026-03-29 00:46:12.432677 | orchestrator | 2026-03-29 00:46:12.432684 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-29 00:46:12.432690 | orchestrator | Sunday 29 March 2026 00:46:09 +0000 (0:00:00.159) 0:00:22.461 ********** 2026-03-29 00:46:12.432698 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})  2026-03-29 00:46:12.432722 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})  2026-03-29 00:46:12.432770 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:46:12.432777 | orchestrator | 2026-03-29 00:46:12.432784 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-29 00:46:12.432790 | orchestrator | Sunday 29 March 2026 00:46:09 +0000 (0:00:00.304) 0:00:22.765 ********** 2026-03-29 00:46:12.432797 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})  2026-03-29 00:46:12.432803 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})  2026-03-29 00:46:12.432810 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:46:12.432816 | orchestrator | 2026-03-29 00:46:12.432823 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-29 00:46:12.432830 | orchestrator | Sunday 29 March 2026 00:46:09 +0000 (0:00:00.160) 0:00:22.926 ********** 2026-03-29 00:46:12.432836 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'})  2026-03-29 00:46:12.432843 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'})  2026-03-29 00:46:12.432849 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:46:12.432856 | orchestrator | 2026-03-29 00:46:12.432862 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-29 00:46:12.432869 | orchestrator | Sunday 29 March 2026 00:46:09 +0000 (0:00:00.171) 0:00:23.097 ********** 2026-03-29 00:46:12.432889 | orchestrator | ok: [testbed-node-3] => { 2026-03-29 00:46:12.432895 | orchestrator |  "lvm_report": { 2026-03-29 00:46:12.432902 | orchestrator |  "lv": [ 2026-03-29 00:46:12.432909 | orchestrator |  { 2026-03-29 00:46:12.432915 | orchestrator |  "lv_name": 
"osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7", 2026-03-29 00:46:12.432922 | orchestrator |  "vg_name": "ceph-ec951f8f-e82d-5973-b083-619786b6a4a7" 2026-03-29 00:46:12.432928 | orchestrator |  }, 2026-03-29 00:46:12.432934 | orchestrator |  { 2026-03-29 00:46:12.432940 | orchestrator |  "lv_name": "osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8", 2026-03-29 00:46:12.432946 | orchestrator |  "vg_name": "ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8" 2026-03-29 00:46:12.432952 | orchestrator |  } 2026-03-29 00:46:12.432958 | orchestrator |  ], 2026-03-29 00:46:12.432964 | orchestrator |  "pv": [ 2026-03-29 00:46:12.432970 | orchestrator |  { 2026-03-29 00:46:12.432977 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-29 00:46:12.432983 | orchestrator |  "vg_name": "ceph-ec951f8f-e82d-5973-b083-619786b6a4a7" 2026-03-29 00:46:12.432990 | orchestrator |  }, 2026-03-29 00:46:12.432997 | orchestrator |  { 2026-03-29 00:46:12.433005 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-29 00:46:12.433012 | orchestrator |  "vg_name": "ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8" 2026-03-29 00:46:12.433019 | orchestrator |  } 2026-03-29 00:46:12.433026 | orchestrator |  ] 2026-03-29 00:46:12.433033 | orchestrator |  } 2026-03-29 00:46:12.433040 | orchestrator | } 2026-03-29 00:46:12.433047 | orchestrator | 2026-03-29 00:46:12.433054 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-29 00:46:12.433061 | orchestrator | 2026-03-29 00:46:12.433068 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-29 00:46:12.433075 | orchestrator | Sunday 29 March 2026 00:46:10 +0000 (0:00:00.274) 0:00:23.371 ********** 2026-03-29 00:46:12.433089 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-29 00:46:12.433096 | orchestrator | 2026-03-29 00:46:12.433103 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-29 
00:46:12.433111 | orchestrator | Sunday 29 March 2026 00:46:10 +0000 (0:00:00.235) 0:00:23.606 ********** 2026-03-29 00:46:12.433118 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:46:12.433124 | orchestrator | 2026-03-29 00:46:12.433130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:12.433136 | orchestrator | Sunday 29 March 2026 00:46:10 +0000 (0:00:00.221) 0:00:23.828 ********** 2026-03-29 00:46:12.433144 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-29 00:46:12.433151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-29 00:46:12.433158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-29 00:46:12.433165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-29 00:46:12.433172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-29 00:46:12.433178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-29 00:46:12.433186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-29 00:46:12.433198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-29 00:46:12.433205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-29 00:46:12.433213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-29 00:46:12.433220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-29 00:46:12.433227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-29 00:46:12.433234 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-29 00:46:12.433241 | orchestrator | 2026-03-29 00:46:12.433248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:12.433255 | orchestrator | Sunday 29 March 2026 00:46:11 +0000 (0:00:00.403) 0:00:24.232 ********** 2026-03-29 00:46:12.433262 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:12.433269 | orchestrator | 2026-03-29 00:46:12.433276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:12.433283 | orchestrator | Sunday 29 March 2026 00:46:11 +0000 (0:00:00.176) 0:00:24.408 ********** 2026-03-29 00:46:12.433289 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:12.433295 | orchestrator | 2026-03-29 00:46:12.433302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:12.433308 | orchestrator | Sunday 29 March 2026 00:46:11 +0000 (0:00:00.186) 0:00:24.595 ********** 2026-03-29 00:46:12.433315 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:12.433321 | orchestrator | 2026-03-29 00:46:12.433327 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:12.433334 | orchestrator | Sunday 29 March 2026 00:46:11 +0000 (0:00:00.475) 0:00:25.070 ********** 2026-03-29 00:46:12.433340 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:12.433347 | orchestrator | 2026-03-29 00:46:12.433353 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:12.433360 | orchestrator | Sunday 29 March 2026 00:46:12 +0000 (0:00:00.180) 0:00:25.251 ********** 2026-03-29 00:46:12.433366 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:12.433373 | orchestrator | 2026-03-29 00:46:12.433379 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-29 00:46:12.433386 | orchestrator | Sunday 29 March 2026 00:46:12 +0000 (0:00:00.179) 0:00:25.430 ********** 2026-03-29 00:46:12.433397 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:12.433404 | orchestrator | 2026-03-29 00:46:12.433414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:22.830226 | orchestrator | Sunday 29 March 2026 00:46:12 +0000 (0:00:00.186) 0:00:25.616 ********** 2026-03-29 00:46:22.830318 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.830332 | orchestrator | 2026-03-29 00:46:22.830342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:22.830351 | orchestrator | Sunday 29 March 2026 00:46:12 +0000 (0:00:00.175) 0:00:25.792 ********** 2026-03-29 00:46:22.830359 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.830367 | orchestrator | 2026-03-29 00:46:22.830376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:22.830384 | orchestrator | Sunday 29 March 2026 00:46:12 +0000 (0:00:00.184) 0:00:25.976 ********** 2026-03-29 00:46:22.830392 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28) 2026-03-29 00:46:22.830401 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28) 2026-03-29 00:46:22.830409 | orchestrator | 2026-03-29 00:46:22.830418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:22.830426 | orchestrator | Sunday 29 March 2026 00:46:13 +0000 (0:00:00.369) 0:00:26.346 ********** 2026-03-29 00:46:22.830434 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c) 2026-03-29 00:46:22.830442 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c) 2026-03-29 00:46:22.830450 | orchestrator | 2026-03-29 00:46:22.830458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:22.830466 | orchestrator | Sunday 29 March 2026 00:46:13 +0000 (0:00:00.436) 0:00:26.782 ********** 2026-03-29 00:46:22.830474 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d) 2026-03-29 00:46:22.830482 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d) 2026-03-29 00:46:22.830490 | orchestrator | 2026-03-29 00:46:22.830498 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:22.830506 | orchestrator | Sunday 29 March 2026 00:46:13 +0000 (0:00:00.382) 0:00:27.165 ********** 2026-03-29 00:46:22.830514 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53) 2026-03-29 00:46:22.830522 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53) 2026-03-29 00:46:22.830530 | orchestrator | 2026-03-29 00:46:22.830538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:22.830546 | orchestrator | Sunday 29 March 2026 00:46:14 +0000 (0:00:00.658) 0:00:27.823 ********** 2026-03-29 00:46:22.830554 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-29 00:46:22.830562 | orchestrator | 2026-03-29 00:46:22.830570 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.830578 | orchestrator | Sunday 29 March 2026 00:46:15 +0000 (0:00:00.491) 0:00:28.314 ********** 2026-03-29 00:46:22.830586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-29 00:46:22.830595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-29 00:46:22.830603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-29 00:46:22.830611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-29 00:46:22.830619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-29 00:46:22.830642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-29 00:46:22.830669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-29 00:46:22.830678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-29 00:46:22.830686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-29 00:46:22.830694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-29 00:46:22.830702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-29 00:46:22.830710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-29 00:46:22.830717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-29 00:46:22.830725 | orchestrator | 2026-03-29 00:46:22.830733 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.830741 | orchestrator | Sunday 29 March 2026 00:46:15 +0000 (0:00:00.724) 0:00:29.038 ********** 2026-03-29 00:46:22.830780 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.830789 | orchestrator | 2026-03-29 
00:46:22.830798 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.830807 | orchestrator | Sunday 29 March 2026 00:46:16 +0000 (0:00:00.189) 0:00:29.228 ********** 2026-03-29 00:46:22.830816 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.830826 | orchestrator | 2026-03-29 00:46:22.830834 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.830843 | orchestrator | Sunday 29 March 2026 00:46:16 +0000 (0:00:00.193) 0:00:29.422 ********** 2026-03-29 00:46:22.830852 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.830861 | orchestrator | 2026-03-29 00:46:22.830886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.830896 | orchestrator | Sunday 29 March 2026 00:46:16 +0000 (0:00:00.207) 0:00:29.629 ********** 2026-03-29 00:46:22.830905 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.830913 | orchestrator | 2026-03-29 00:46:22.830922 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.830931 | orchestrator | Sunday 29 March 2026 00:46:16 +0000 (0:00:00.186) 0:00:29.816 ********** 2026-03-29 00:46:22.830940 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.830949 | orchestrator | 2026-03-29 00:46:22.830958 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.830966 | orchestrator | Sunday 29 March 2026 00:46:16 +0000 (0:00:00.186) 0:00:30.002 ********** 2026-03-29 00:46:22.830975 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.830984 | orchestrator | 2026-03-29 00:46:22.830993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.831002 | orchestrator | Sunday 29 March 2026 00:46:17 +0000 (0:00:00.206) 
0:00:30.209 ********** 2026-03-29 00:46:22.831010 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.831019 | orchestrator | 2026-03-29 00:46:22.831028 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.831037 | orchestrator | Sunday 29 March 2026 00:46:17 +0000 (0:00:00.191) 0:00:30.400 ********** 2026-03-29 00:46:22.831046 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.831054 | orchestrator | 2026-03-29 00:46:22.831063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.831073 | orchestrator | Sunday 29 March 2026 00:46:17 +0000 (0:00:00.190) 0:00:30.591 ********** 2026-03-29 00:46:22.831082 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-29 00:46:22.831091 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-29 00:46:22.831100 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-29 00:46:22.831109 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-29 00:46:22.831118 | orchestrator | 2026-03-29 00:46:22.831127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.831141 | orchestrator | Sunday 29 March 2026 00:46:18 +0000 (0:00:00.732) 0:00:31.323 ********** 2026-03-29 00:46:22.831149 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.831157 | orchestrator | 2026-03-29 00:46:22.831165 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.831172 | orchestrator | Sunday 29 March 2026 00:46:18 +0000 (0:00:00.179) 0:00:31.502 ********** 2026-03-29 00:46:22.831180 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.831188 | orchestrator | 2026-03-29 00:46:22.831196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.831204 | orchestrator | Sunday 29 
March 2026 00:46:18 +0000 (0:00:00.476) 0:00:31.979 ********** 2026-03-29 00:46:22.831211 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.831219 | orchestrator | 2026-03-29 00:46:22.831227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-29 00:46:22.831235 | orchestrator | Sunday 29 March 2026 00:46:18 +0000 (0:00:00.192) 0:00:32.171 ********** 2026-03-29 00:46:22.831243 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.831251 | orchestrator | 2026-03-29 00:46:22.831259 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-29 00:46:22.831292 | orchestrator | Sunday 29 March 2026 00:46:19 +0000 (0:00:00.183) 0:00:32.355 ********** 2026-03-29 00:46:22.831300 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.831308 | orchestrator | 2026-03-29 00:46:22.831316 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-29 00:46:22.831324 | orchestrator | Sunday 29 March 2026 00:46:19 +0000 (0:00:00.130) 0:00:32.485 ********** 2026-03-29 00:46:22.831332 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '00df2b4e-a360-5652-a277-e346f3e9f535'}}) 2026-03-29 00:46:22.831340 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35a0cf9a-662c-5baf-94a5-8e3a66aae069'}}) 2026-03-29 00:46:22.831348 | orchestrator | 2026-03-29 00:46:22.831355 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-29 00:46:22.831363 | orchestrator | Sunday 29 March 2026 00:46:19 +0000 (0:00:00.174) 0:00:32.660 ********** 2026-03-29 00:46:22.831372 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'}) 2026-03-29 00:46:22.831381 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'}) 2026-03-29 00:46:22.831389 | orchestrator | 2026-03-29 00:46:22.831397 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-29 00:46:22.831405 | orchestrator | Sunday 29 March 2026 00:46:21 +0000 (0:00:01.879) 0:00:34.539 ********** 2026-03-29 00:46:22.831413 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:22.831422 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:22.831430 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:22.831438 | orchestrator | 2026-03-29 00:46:22.831445 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-29 00:46:22.831453 | orchestrator | Sunday 29 March 2026 00:46:21 +0000 (0:00:00.129) 0:00:34.669 ********** 2026-03-29 00:46:22.831461 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'}) 2026-03-29 00:46:22.831474 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'}) 2026-03-29 00:46:27.640187 | orchestrator | 2026-03-29 00:46:27.640256 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-29 00:46:27.640280 | orchestrator | Sunday 29 March 2026 00:46:22 +0000 (0:00:01.341) 0:00:36.011 ********** 2026-03-29 00:46:27.640285 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 
'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:27.640291 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:27.640295 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640300 | orchestrator | 2026-03-29 00:46:27.640305 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-29 00:46:27.640309 | orchestrator | Sunday 29 March 2026 00:46:22 +0000 (0:00:00.139) 0:00:36.151 ********** 2026-03-29 00:46:27.640312 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640316 | orchestrator | 2026-03-29 00:46:27.640320 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-29 00:46:27.640324 | orchestrator | Sunday 29 March 2026 00:46:23 +0000 (0:00:00.132) 0:00:36.283 ********** 2026-03-29 00:46:27.640328 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:27.640332 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:27.640336 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640340 | orchestrator | 2026-03-29 00:46:27.640344 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-29 00:46:27.640347 | orchestrator | Sunday 29 March 2026 00:46:23 +0000 (0:00:00.134) 0:00:36.417 ********** 2026-03-29 00:46:27.640351 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640355 | orchestrator | 2026-03-29 00:46:27.640359 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-29 00:46:27.640362 | orchestrator | Sunday 
29 March 2026 00:46:23 +0000 (0:00:00.131) 0:00:36.548 ********** 2026-03-29 00:46:27.640366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:27.640370 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:27.640374 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640378 | orchestrator | 2026-03-29 00:46:27.640381 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-29 00:46:27.640394 | orchestrator | Sunday 29 March 2026 00:46:23 +0000 (0:00:00.269) 0:00:36.818 ********** 2026-03-29 00:46:27.640398 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640402 | orchestrator | 2026-03-29 00:46:27.640406 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-29 00:46:27.640410 | orchestrator | Sunday 29 March 2026 00:46:23 +0000 (0:00:00.133) 0:00:36.951 ********** 2026-03-29 00:46:27.640414 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:27.640417 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:27.640421 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640425 | orchestrator | 2026-03-29 00:46:27.640429 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-29 00:46:27.640433 | orchestrator | Sunday 29 March 2026 00:46:23 +0000 (0:00:00.128) 0:00:37.080 ********** 2026-03-29 00:46:27.640436 | orchestrator | ok: [testbed-node-4] 
2026-03-29 00:46:27.640441 | orchestrator | 2026-03-29 00:46:27.640445 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-29 00:46:27.640453 | orchestrator | Sunday 29 March 2026 00:46:24 +0000 (0:00:00.116) 0:00:37.196 ********** 2026-03-29 00:46:27.640457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:27.640461 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:27.640465 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640468 | orchestrator | 2026-03-29 00:46:27.640472 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-29 00:46:27.640476 | orchestrator | Sunday 29 March 2026 00:46:24 +0000 (0:00:00.126) 0:00:37.323 ********** 2026-03-29 00:46:27.640480 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:27.640483 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:27.640487 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640491 | orchestrator | 2026-03-29 00:46:27.640495 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-29 00:46:27.640509 | orchestrator | Sunday 29 March 2026 00:46:24 +0000 (0:00:00.115) 0:00:37.439 ********** 2026-03-29 00:46:27.640513 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 
00:46:27.640517 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:27.640521 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640524 | orchestrator | 2026-03-29 00:46:27.640528 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-29 00:46:27.640532 | orchestrator | Sunday 29 March 2026 00:46:24 +0000 (0:00:00.130) 0:00:37.570 ********** 2026-03-29 00:46:27.640536 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640540 | orchestrator | 2026-03-29 00:46:27.640543 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-29 00:46:27.640547 | orchestrator | Sunday 29 March 2026 00:46:24 +0000 (0:00:00.119) 0:00:37.689 ********** 2026-03-29 00:46:27.640551 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640555 | orchestrator | 2026-03-29 00:46:27.640559 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-29 00:46:27.640562 | orchestrator | Sunday 29 March 2026 00:46:24 +0000 (0:00:00.117) 0:00:37.807 ********** 2026-03-29 00:46:27.640566 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640570 | orchestrator | 2026-03-29 00:46:27.640574 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-29 00:46:27.640577 | orchestrator | Sunday 29 March 2026 00:46:24 +0000 (0:00:00.120) 0:00:37.927 ********** 2026-03-29 00:46:27.640581 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 00:46:27.640585 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-29 00:46:27.640589 | orchestrator | } 2026-03-29 00:46:27.640593 | orchestrator | 2026-03-29 00:46:27.640597 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-29 
00:46:27.640601 | orchestrator | Sunday 29 March 2026 00:46:24 +0000 (0:00:00.121) 0:00:38.048 ********** 2026-03-29 00:46:27.640605 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 00:46:27.640608 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-29 00:46:27.640612 | orchestrator | } 2026-03-29 00:46:27.640616 | orchestrator | 2026-03-29 00:46:27.640620 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-29 00:46:27.640624 | orchestrator | Sunday 29 March 2026 00:46:24 +0000 (0:00:00.118) 0:00:38.167 ********** 2026-03-29 00:46:27.640631 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 00:46:27.640634 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-29 00:46:27.640638 | orchestrator | } 2026-03-29 00:46:27.640642 | orchestrator | 2026-03-29 00:46:27.640646 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-29 00:46:27.640650 | orchestrator | Sunday 29 March 2026 00:46:25 +0000 (0:00:00.274) 0:00:38.442 ********** 2026-03-29 00:46:27.640653 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:46:27.640657 | orchestrator | 2026-03-29 00:46:27.640661 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-29 00:46:27.640665 | orchestrator | Sunday 29 March 2026 00:46:25 +0000 (0:00:00.508) 0:00:38.950 ********** 2026-03-29 00:46:27.640669 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:46:27.640673 | orchestrator | 2026-03-29 00:46:27.640676 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-29 00:46:27.640680 | orchestrator | Sunday 29 March 2026 00:46:26 +0000 (0:00:00.461) 0:00:39.412 ********** 2026-03-29 00:46:27.640684 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:46:27.640688 | orchestrator | 2026-03-29 00:46:27.640692 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-29 00:46:27.640695 | orchestrator | Sunday 29 March 2026 00:46:26 +0000 (0:00:00.474) 0:00:39.887 ********** 2026-03-29 00:46:27.640699 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:46:27.640703 | orchestrator | 2026-03-29 00:46:27.640785 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-29 00:46:27.640791 | orchestrator | Sunday 29 March 2026 00:46:26 +0000 (0:00:00.132) 0:00:40.020 ********** 2026-03-29 00:46:27.640795 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640799 | orchestrator | 2026-03-29 00:46:27.640810 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-29 00:46:27.640814 | orchestrator | Sunday 29 March 2026 00:46:26 +0000 (0:00:00.098) 0:00:40.119 ********** 2026-03-29 00:46:27.640819 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640823 | orchestrator | 2026-03-29 00:46:27.640827 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-29 00:46:27.640832 | orchestrator | Sunday 29 March 2026 00:46:27 +0000 (0:00:00.094) 0:00:40.213 ********** 2026-03-29 00:46:27.640836 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 00:46:27.640840 | orchestrator |  "vgs_report": { 2026-03-29 00:46:27.640845 | orchestrator |  "vg": [] 2026-03-29 00:46:27.640849 | orchestrator |  } 2026-03-29 00:46:27.640852 | orchestrator | } 2026-03-29 00:46:27.640856 | orchestrator | 2026-03-29 00:46:27.640860 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-29 00:46:27.640864 | orchestrator | Sunday 29 March 2026 00:46:27 +0000 (0:00:00.129) 0:00:40.342 ********** 2026-03-29 00:46:27.640868 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640871 | orchestrator | 2026-03-29 00:46:27.640875 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-29 00:46:27.640879 | orchestrator | Sunday 29 March 2026 00:46:27 +0000 (0:00:00.125) 0:00:40.468 ********** 2026-03-29 00:46:27.640883 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640887 | orchestrator | 2026-03-29 00:46:27.640890 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-29 00:46:27.640894 | orchestrator | Sunday 29 March 2026 00:46:27 +0000 (0:00:00.115) 0:00:40.583 ********** 2026-03-29 00:46:27.640898 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640902 | orchestrator | 2026-03-29 00:46:27.640906 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-29 00:46:27.640910 | orchestrator | Sunday 29 March 2026 00:46:27 +0000 (0:00:00.119) 0:00:40.703 ********** 2026-03-29 00:46:27.640913 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:27.640917 | orchestrator | 2026-03-29 00:46:27.640925 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-29 00:46:31.871294 | orchestrator | Sunday 29 March 2026 00:46:27 +0000 (0:00:00.119) 0:00:40.822 ********** 2026-03-29 00:46:31.871431 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.871445 | orchestrator | 2026-03-29 00:46:31.871454 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-29 00:46:31.871463 | orchestrator | Sunday 29 March 2026 00:46:27 +0000 (0:00:00.260) 0:00:41.083 ********** 2026-03-29 00:46:31.871471 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.871479 | orchestrator | 2026-03-29 00:46:31.871487 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-29 00:46:31.871495 | orchestrator | Sunday 29 March 2026 00:46:28 +0000 (0:00:00.127) 0:00:41.210 ********** 2026-03-29 00:46:31.871503 | orchestrator | skipping: [testbed-node-4] 
2026-03-29 00:46:31.871510 | orchestrator | 2026-03-29 00:46:31.871518 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-29 00:46:31.871526 | orchestrator | Sunday 29 March 2026 00:46:28 +0000 (0:00:00.132) 0:00:41.343 ********** 2026-03-29 00:46:31.871534 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.871542 | orchestrator | 2026-03-29 00:46:31.871549 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-29 00:46:31.871557 | orchestrator | Sunday 29 March 2026 00:46:28 +0000 (0:00:00.127) 0:00:41.471 ********** 2026-03-29 00:46:31.871565 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.871574 | orchestrator | 2026-03-29 00:46:31.871587 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-29 00:46:31.871599 | orchestrator | Sunday 29 March 2026 00:46:28 +0000 (0:00:00.128) 0:00:41.599 ********** 2026-03-29 00:46:31.871611 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.871623 | orchestrator | 2026-03-29 00:46:31.871635 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-29 00:46:31.871647 | orchestrator | Sunday 29 March 2026 00:46:28 +0000 (0:00:00.129) 0:00:41.729 ********** 2026-03-29 00:46:31.871659 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.871671 | orchestrator | 2026-03-29 00:46:31.871684 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-29 00:46:31.871698 | orchestrator | Sunday 29 March 2026 00:46:28 +0000 (0:00:00.126) 0:00:41.856 ********** 2026-03-29 00:46:31.871710 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.871724 | orchestrator | 2026-03-29 00:46:31.871737 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-29 00:46:31.871750 | orchestrator | 
Sunday 29 March 2026 00:46:28 +0000 (0:00:00.125) 0:00:41.981 ********** 2026-03-29 00:46:31.871787 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.871803 | orchestrator | 2026-03-29 00:46:31.871818 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-29 00:46:31.871831 | orchestrator | Sunday 29 March 2026 00:46:28 +0000 (0:00:00.118) 0:00:42.100 ********** 2026-03-29 00:46:31.871839 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.871849 | orchestrator | 2026-03-29 00:46:31.871859 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-29 00:46:31.871882 | orchestrator | Sunday 29 March 2026 00:46:29 +0000 (0:00:00.126) 0:00:42.227 ********** 2026-03-29 00:46:31.871892 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:31.871903 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:31.871913 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.871922 | orchestrator | 2026-03-29 00:46:31.871930 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-29 00:46:31.871939 | orchestrator | Sunday 29 March 2026 00:46:29 +0000 (0:00:00.145) 0:00:42.372 ********** 2026-03-29 00:46:31.871948 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:31.871966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:31.871975 | orchestrator | skipping: 
[testbed-node-4] 2026-03-29 00:46:31.871984 | orchestrator | 2026-03-29 00:46:31.871993 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-29 00:46:31.872002 | orchestrator | Sunday 29 March 2026 00:46:29 +0000 (0:00:00.133) 0:00:42.506 ********** 2026-03-29 00:46:31.872011 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:31.872020 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:31.872029 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.872037 | orchestrator | 2026-03-29 00:46:31.872044 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-29 00:46:31.872052 | orchestrator | Sunday 29 March 2026 00:46:29 +0000 (0:00:00.265) 0:00:42.771 ********** 2026-03-29 00:46:31.872060 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:31.872068 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:31.872076 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.872084 | orchestrator | 2026-03-29 00:46:31.872108 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-29 00:46:31.872116 | orchestrator | Sunday 29 March 2026 00:46:29 +0000 (0:00:00.145) 0:00:42.916 ********** 2026-03-29 00:46:31.872124 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 
'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:31.872132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:31.872140 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.872148 | orchestrator | 2026-03-29 00:46:31.872156 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-29 00:46:31.872163 | orchestrator | Sunday 29 March 2026 00:46:29 +0000 (0:00:00.147) 0:00:43.064 ********** 2026-03-29 00:46:31.872171 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:31.872180 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:31.872188 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.872196 | orchestrator | 2026-03-29 00:46:31.872203 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-29 00:46:31.872211 | orchestrator | Sunday 29 March 2026 00:46:29 +0000 (0:00:00.122) 0:00:43.186 ********** 2026-03-29 00:46:31.872219 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:31.872227 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:31.872235 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.872242 | orchestrator | 2026-03-29 00:46:31.872250 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-29 
00:46:31.872258 | orchestrator | Sunday 29 March 2026 00:46:30 +0000 (0:00:00.144) 0:00:43.331 ********** 2026-03-29 00:46:31.872266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:31.872279 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:31.872290 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.872299 | orchestrator | 2026-03-29 00:46:31.872306 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-29 00:46:31.872314 | orchestrator | Sunday 29 March 2026 00:46:30 +0000 (0:00:00.141) 0:00:43.472 ********** 2026-03-29 00:46:31.872322 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:46:31.872330 | orchestrator | 2026-03-29 00:46:31.872338 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-29 00:46:31.872346 | orchestrator | Sunday 29 March 2026 00:46:30 +0000 (0:00:00.509) 0:00:43.982 ********** 2026-03-29 00:46:31.872353 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:46:31.872361 | orchestrator | 2026-03-29 00:46:31.872369 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-29 00:46:31.872377 | orchestrator | Sunday 29 March 2026 00:46:31 +0000 (0:00:00.498) 0:00:44.481 ********** 2026-03-29 00:46:31.872385 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:46:31.872393 | orchestrator | 2026-03-29 00:46:31.872401 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-29 00:46:31.872408 | orchestrator | Sunday 29 March 2026 00:46:31 +0000 (0:00:00.136) 0:00:44.617 ********** 2026-03-29 00:46:31.872416 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'vg_name': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'}) 2026-03-29 00:46:31.872426 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'vg_name': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'}) 2026-03-29 00:46:31.872434 | orchestrator | 2026-03-29 00:46:31.872442 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-29 00:46:31.872449 | orchestrator | Sunday 29 March 2026 00:46:31 +0000 (0:00:00.160) 0:00:44.777 ********** 2026-03-29 00:46:31.872457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:31.872465 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:31.872473 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:31.872485 | orchestrator | 2026-03-29 00:46:31.872497 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-29 00:46:31.872511 | orchestrator | Sunday 29 March 2026 00:46:31 +0000 (0:00:00.129) 0:00:44.907 ********** 2026-03-29 00:46:31.872524 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:31.872544 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:37.317839 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:37.317967 | orchestrator | 2026-03-29 00:46:37.317986 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-29 00:46:37.318000 | 
orchestrator | Sunday 29 March 2026 00:46:31 +0000 (0:00:00.148) 0:00:45.055 ********** 2026-03-29 00:46:37.318011 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'})  2026-03-29 00:46:37.318082 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'})  2026-03-29 00:46:37.318094 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:46:37.318105 | orchestrator | 2026-03-29 00:46:37.318117 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-29 00:46:37.318154 | orchestrator | Sunday 29 March 2026 00:46:32 +0000 (0:00:00.148) 0:00:45.204 ********** 2026-03-29 00:46:37.318166 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 00:46:37.318177 | orchestrator |  "lvm_report": { 2026-03-29 00:46:37.318190 | orchestrator |  "lv": [ 2026-03-29 00:46:37.318201 | orchestrator |  { 2026-03-29 00:46:37.318212 | orchestrator |  "lv_name": "osd-block-00df2b4e-a360-5652-a277-e346f3e9f535", 2026-03-29 00:46:37.318224 | orchestrator |  "vg_name": "ceph-00df2b4e-a360-5652-a277-e346f3e9f535" 2026-03-29 00:46:37.318235 | orchestrator |  }, 2026-03-29 00:46:37.318246 | orchestrator |  { 2026-03-29 00:46:37.318257 | orchestrator |  "lv_name": "osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069", 2026-03-29 00:46:37.318268 | orchestrator |  "vg_name": "ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069" 2026-03-29 00:46:37.318279 | orchestrator |  } 2026-03-29 00:46:37.318290 | orchestrator |  ], 2026-03-29 00:46:37.318301 | orchestrator |  "pv": [ 2026-03-29 00:46:37.318312 | orchestrator |  { 2026-03-29 00:46:37.318323 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-29 00:46:37.318334 | orchestrator |  "vg_name": "ceph-00df2b4e-a360-5652-a277-e346f3e9f535" 2026-03-29 00:46:37.318347 | orchestrator |  }, 2026-03-29 
00:46:37.318359 | orchestrator |  { 2026-03-29 00:46:37.318372 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-29 00:46:37.318384 | orchestrator |  "vg_name": "ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069" 2026-03-29 00:46:37.318397 | orchestrator |  } 2026-03-29 00:46:37.318409 | orchestrator |  ] 2026-03-29 00:46:37.318421 | orchestrator |  } 2026-03-29 00:46:37.318435 | orchestrator | } 2026-03-29 00:46:37.318456 | orchestrator | 2026-03-29 00:46:37.318470 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-29 00:46:37.318482 | orchestrator | 2026-03-29 00:46:37.318495 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-29 00:46:37.318507 | orchestrator | Sunday 29 March 2026 00:46:32 +0000 (0:00:00.380) 0:00:45.584 ********** 2026-03-29 00:46:37.318520 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-29 00:46:37.318532 | orchestrator | 2026-03-29 00:46:37.318544 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-29 00:46:37.318557 | orchestrator | Sunday 29 March 2026 00:46:32 +0000 (0:00:00.224) 0:00:45.809 ********** 2026-03-29 00:46:37.318570 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:46:37.318582 | orchestrator | 2026-03-29 00:46:37.318595 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:37.318607 | orchestrator | Sunday 29 March 2026 00:46:32 +0000 (0:00:00.208) 0:00:46.017 ********** 2026-03-29 00:46:37.318620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-29 00:46:37.318633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-29 00:46:37.318645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-29 00:46:37.318657 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-29 00:46:37.318669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-29 00:46:37.318682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-29 00:46:37.318694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-29 00:46:37.318706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-29 00:46:37.318717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-29 00:46:37.318728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-29 00:46:37.318747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-29 00:46:37.318758 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-29 00:46:37.318866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-29 00:46:37.318888 | orchestrator | 2026-03-29 00:46:37.318905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:37.318931 | orchestrator | Sunday 29 March 2026 00:46:33 +0000 (0:00:00.364) 0:00:46.382 ********** 2026-03-29 00:46:37.318951 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:46:37.318969 | orchestrator | 2026-03-29 00:46:37.318987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-29 00:46:37.319007 | orchestrator | Sunday 29 March 2026 00:46:33 +0000 (0:00:00.205) 0:00:46.587 ********** 2026-03-29 00:46:37.319024 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:46:37.319041 | orchestrator | 2026-03-29 
00:46:37.319053 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:46:37.319084 | orchestrator | Sunday 29 March 2026  00:46:33 +0000 (0:00:00.176)       0:00:46.764 **********
2026-03-29 00:46:37.319095 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:37.319106 | orchestrator |
2026-03-29 00:46:37.319117 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:46:37.319128 | orchestrator | Sunday 29 March 2026  00:46:33 +0000 (0:00:00.179)       0:00:46.944 **********
2026-03-29 00:46:37.319139 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:37.319150 | orchestrator |
2026-03-29 00:46:37.319161 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:46:37.319218 | orchestrator | Sunday 29 March 2026  00:46:33 +0000 (0:00:00.178)       0:00:47.122 **********
2026-03-29 00:46:37.319230 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:37.319241 | orchestrator |
2026-03-29 00:46:37.319251 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:46:37.319262 | orchestrator | Sunday 29 March 2026  00:46:34 +0000 (0:00:00.480)       0:00:47.603 **********
2026-03-29 00:46:37.319273 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:37.319284 | orchestrator |
2026-03-29 00:46:37.319295 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:46:37.319305 | orchestrator | Sunday 29 March 2026  00:46:34 +0000 (0:00:00.197)       0:00:47.800 **********
2026-03-29 00:46:37.319316 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:37.319327 | orchestrator |
2026-03-29 00:46:37.319338 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:46:37.319349 | orchestrator | Sunday 29 March 2026  00:46:34 +0000 (0:00:00.174)       0:00:47.974 **********
2026-03-29 00:46:37.319359 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:37.319370 | orchestrator |
2026-03-29 00:46:37.319381 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:46:37.319392 | orchestrator | Sunday 29 March 2026  00:46:34 +0000 (0:00:00.195)       0:00:48.170 **********
2026-03-29 00:46:37.319403 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8)
2026-03-29 00:46:37.319415 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8)
2026-03-29 00:46:37.319426 | orchestrator |
2026-03-29 00:46:37.319444 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:46:37.319461 | orchestrator | Sunday 29 March 2026  00:46:35 +0000 (0:00:00.382)       0:00:48.553 **********
2026-03-29 00:46:37.319479 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41)
2026-03-29 00:46:37.319495 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41)
2026-03-29 00:46:37.319507 | orchestrator |
2026-03-29 00:46:37.319517 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:46:37.319546 | orchestrator | Sunday 29 March 2026  00:46:35 +0000 (0:00:00.382)       0:00:48.936 **********
2026-03-29 00:46:37.319557 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89)
2026-03-29 00:46:37.319568 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89)
2026-03-29 00:46:37.319579 | orchestrator |
2026-03-29 00:46:37.319590 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:46:37.319600 | orchestrator | Sunday 29 March 2026  00:46:36 +0000 (0:00:00.437)       0:00:49.373 **********
2026-03-29 00:46:37.319611 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c)
2026-03-29 00:46:37.319622 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c)
2026-03-29 00:46:37.319633 | orchestrator |
2026-03-29 00:46:37.319644 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-29 00:46:37.319654 | orchestrator | Sunday 29 March 2026  00:46:36 +0000 (0:00:00.412)       0:00:49.786 **********
2026-03-29 00:46:37.319665 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-29 00:46:37.319676 | orchestrator |
2026-03-29 00:46:37.319686 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:37.319697 | orchestrator | Sunday 29 March 2026  00:46:36 +0000 (0:00:00.323)       0:00:50.109 **********
2026-03-29 00:46:37.319708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-29 00:46:37.319719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-29 00:46:37.319733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-29 00:46:37.319752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-29 00:46:37.319793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-29 00:46:37.319811 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-29 00:46:37.319828 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-29 00:46:37.319846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-29 00:46:37.319863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-29 00:46:37.319879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-29 00:46:37.319894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-29 00:46:37.319923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-29 00:46:45.924999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-29 00:46:45.925067 | orchestrator |
2026-03-29 00:46:45.925079 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925087 | orchestrator | Sunday 29 March 2026  00:46:37 +0000 (0:00:00.380)       0:00:50.490 **********
2026-03-29 00:46:45.925095 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925103 | orchestrator |
2026-03-29 00:46:45.925110 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925116 | orchestrator | Sunday 29 March 2026  00:46:37 +0000 (0:00:00.172)       0:00:50.662 **********
2026-03-29 00:46:45.925124 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925131 | orchestrator |
2026-03-29 00:46:45.925138 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925145 | orchestrator | Sunday 29 March 2026  00:46:37 +0000 (0:00:00.469)       0:00:51.131 **********
2026-03-29 00:46:45.925152 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925176 | orchestrator |
2026-03-29 00:46:45.925184 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925191 | orchestrator | Sunday 29 March 2026  00:46:38 +0000 (0:00:00.226)       0:00:51.357 **********
2026-03-29 00:46:45.925198 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925205 | orchestrator |
2026-03-29 00:46:45.925212 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925219 | orchestrator | Sunday 29 March 2026  00:46:38 +0000 (0:00:00.181)       0:00:51.538 **********
2026-03-29 00:46:45.925226 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925233 | orchestrator |
2026-03-29 00:46:45.925240 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925247 | orchestrator | Sunday 29 March 2026  00:46:38 +0000 (0:00:00.178)       0:00:51.717 **********
2026-03-29 00:46:45.925254 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925261 | orchestrator |
2026-03-29 00:46:45.925269 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925276 | orchestrator | Sunday 29 March 2026  00:46:38 +0000 (0:00:00.207)       0:00:51.925 **********
2026-03-29 00:46:45.925283 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925290 | orchestrator |
2026-03-29 00:46:45.925297 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925304 | orchestrator | Sunday 29 March 2026  00:46:38 +0000 (0:00:00.210)       0:00:52.136 **********
2026-03-29 00:46:45.925310 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925323 | orchestrator |
2026-03-29 00:46:45.925330 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925336 | orchestrator | Sunday 29 March 2026  00:46:39 +0000 (0:00:00.183)       0:00:52.320 **********
2026-03-29 00:46:45.925343 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-29 00:46:45.925359 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-29 00:46:45.925366 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-29 00:46:45.925373 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-29 00:46:45.925379 | orchestrator |
2026-03-29 00:46:45.925385 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925393 | orchestrator | Sunday 29 March 2026  00:46:39 +0000 (0:00:00.593)       0:00:52.913 **********
2026-03-29 00:46:45.925399 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925406 | orchestrator |
2026-03-29 00:46:45.925413 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925419 | orchestrator | Sunday 29 March 2026  00:46:39 +0000 (0:00:00.181)       0:00:53.094 **********
2026-03-29 00:46:45.925426 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925433 | orchestrator |
2026-03-29 00:46:45.925440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925447 | orchestrator | Sunday 29 March 2026  00:46:40 +0000 (0:00:00.172)       0:00:53.267 **********
2026-03-29 00:46:45.925454 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925461 | orchestrator |
2026-03-29 00:46:45.925468 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-29 00:46:45.925475 | orchestrator | Sunday 29 March 2026  00:46:40 +0000 (0:00:00.183)       0:00:53.451 **********
2026-03-29 00:46:45.925482 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925489 | orchestrator |
2026-03-29 00:46:45.925495 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-29 00:46:45.925502 | orchestrator | Sunday 29 March 2026  00:46:40 +0000 (0:00:00.167)       0:00:53.619 **********
2026-03-29 00:46:45.925508 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925515 | orchestrator |
2026-03-29 00:46:45.925522 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-29 00:46:45.925529 | orchestrator | Sunday 29 March 2026  00:46:40 +0000 (0:00:00.227)       0:00:53.847 **********
2026-03-29 00:46:45.925536 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '687a2d88-e62e-55f7-9995-e7b8ae522292'}})
2026-03-29 00:46:45.925549 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b95a2846-f14f-5a7d-ae9e-15318cf5fdef'}})
2026-03-29 00:46:45.925556 | orchestrator |
2026-03-29 00:46:45.925563 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-29 00:46:45.925570 | orchestrator | Sunday 29 March 2026  00:46:40 +0000 (0:00:00.172)       0:00:54.020 **********
2026-03-29 00:46:45.925578 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:45.925586 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:45.925593 | orchestrator |
2026-03-29 00:46:45.925599 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-29 00:46:45.925616 | orchestrator | Sunday 29 March 2026  00:46:42 +0000 (0:00:02.073)       0:00:56.093 **********
2026-03-29 00:46:45.925624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:45.925632 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:45.925639 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925646 | orchestrator |
2026-03-29 00:46:45.925652 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-29 00:46:45.925660 | orchestrator | Sunday 29 March 2026  00:46:43 +0000 (0:00:00.145)       0:00:56.239 **********
2026-03-29 00:46:45.925667 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:45.925674 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:45.925681 | orchestrator |
2026-03-29 00:46:45.925688 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-29 00:46:45.925694 | orchestrator | Sunday 29 March 2026  00:46:44 +0000 (0:00:01.509)       0:00:57.748 **********
2026-03-29 00:46:45.925701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:45.925707 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:45.925714 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925721 | orchestrator |
2026-03-29 00:46:45.925728 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-29 00:46:45.925735 | orchestrator | Sunday 29 March 2026  00:46:44 +0000 (0:00:00.124)       0:00:57.873 **********
2026-03-29 00:46:45.925741 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925747 | orchestrator |
2026-03-29 00:46:45.925753 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-29 00:46:45.925759 | orchestrator | Sunday 29 March 2026  00:46:44 +0000 (0:00:00.137)       0:00:58.011 **********
2026-03-29 00:46:45.925766 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:45.925775 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:45.925836 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925845 | orchestrator |
2026-03-29 00:46:45.925849 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-29 00:46:45.925853 | orchestrator | Sunday 29 March 2026  00:46:44 +0000 (0:00:00.130)       0:00:58.141 **********
2026-03-29 00:46:45.925862 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925866 | orchestrator |
2026-03-29 00:46:45.925869 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-29 00:46:45.925873 | orchestrator | Sunday 29 March 2026  00:46:45 +0000 (0:00:00.122)       0:00:58.263 **********
2026-03-29 00:46:45.925877 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:45.925881 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:45.925885 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925888 | orchestrator |
2026-03-29 00:46:45.925892 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-29 00:46:45.925896 | orchestrator | Sunday 29 March 2026  00:46:45 +0000 (0:00:00.139)       0:00:58.403 **********
2026-03-29 00:46:45.925899 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925903 | orchestrator |
2026-03-29 00:46:45.925907 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-29 00:46:45.925911 | orchestrator | Sunday 29 March 2026  00:46:45 +0000 (0:00:00.129)       0:00:58.533 **********
2026-03-29 00:46:45.925915 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:45.925918 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:45.925922 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:45.925926 | orchestrator |
2026-03-29 00:46:45.925930 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-29 00:46:45.925933 | orchestrator | Sunday 29 March 2026  00:46:45 +0000 (0:00:00.281)       0:00:58.672 **********
2026-03-29 00:46:45.925937 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:46:45.925941 | orchestrator |
2026-03-29 00:46:45.925945 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-29 00:46:45.925949 | orchestrator | Sunday 29 March 2026  00:46:45 +0000 (0:00:00.281)       0:00:58.953 **********
2026-03-29 00:46:45.925957 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:51.588864 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:51.588915 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.588921 | orchestrator |
2026-03-29 00:46:51.588926 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-29 00:46:51.588931 | orchestrator | Sunday 29 March 2026  00:46:45 +0000 (0:00:00.154)       0:00:59.108 **********
2026-03-29 00:46:51.588935 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:51.588940 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:51.588944 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.588948 | orchestrator |
2026-03-29 00:46:51.588952 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-29 00:46:51.588956 | orchestrator | Sunday 29 March 2026  00:46:46 +0000 (0:00:00.137)       0:00:59.246 **********
2026-03-29 00:46:51.588960 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:51.588964 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:51.588978 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.588982 | orchestrator |
2026-03-29 00:46:51.588986 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-29 00:46:51.588990 | orchestrator | Sunday 29 March 2026  00:46:46 +0000 (0:00:00.146)       0:00:59.392 **********
2026-03-29 00:46:51.588994 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.588998 | orchestrator |
2026-03-29 00:46:51.589001 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-29 00:46:51.589005 | orchestrator | Sunday 29 March 2026  00:46:46 +0000 (0:00:00.122)       0:00:59.515 **********
2026-03-29 00:46:51.589009 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589013 | orchestrator |
2026-03-29 00:46:51.589017 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-29 00:46:51.589020 | orchestrator | Sunday 29 March 2026  00:46:46 +0000 (0:00:00.140)       0:00:59.656 **********
2026-03-29 00:46:51.589024 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589028 | orchestrator |
2026-03-29 00:46:51.589032 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-29 00:46:51.589036 | orchestrator | Sunday 29 March 2026  00:46:46 +0000 (0:00:00.106)       0:00:59.762 **********
2026-03-29 00:46:51.589040 | orchestrator | ok: [testbed-node-5] => {
2026-03-29 00:46:51.589044 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-29 00:46:51.589048 | orchestrator | }
2026-03-29 00:46:51.589052 | orchestrator |
2026-03-29 00:46:51.589055 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-29 00:46:51.589059 | orchestrator | Sunday 29 March 2026  00:46:46 +0000 (0:00:00.129)       0:00:59.892 **********
2026-03-29 00:46:51.589063 | orchestrator | ok: [testbed-node-5] => {
2026-03-29 00:46:51.589067 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-29 00:46:51.589071 | orchestrator | }
2026-03-29 00:46:51.589075 | orchestrator |
2026-03-29 00:46:51.589079 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-29 00:46:51.589083 | orchestrator | Sunday 29 March 2026  00:46:46 +0000 (0:00:00.133)       0:01:00.025 **********
2026-03-29 00:46:51.589087 | orchestrator | ok: [testbed-node-5] => {
2026-03-29 00:46:51.589090 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-29 00:46:51.589094 | orchestrator | }
2026-03-29 00:46:51.589098 | orchestrator |
2026-03-29 00:46:51.589102 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-29 00:46:51.589106 | orchestrator | Sunday 29 March 2026  00:46:46 +0000 (0:00:00.127)       0:01:00.153 **********
2026-03-29 00:46:51.589109 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:46:51.589113 | orchestrator |
2026-03-29 00:46:51.589117 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-29 00:46:51.589121 | orchestrator | Sunday 29 March 2026  00:46:47 +0000 (0:00:00.504)       0:01:00.657 **********
2026-03-29 00:46:51.589124 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:46:51.589128 | orchestrator |
2026-03-29 00:46:51.589132 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-29 00:46:51.589136 | orchestrator | Sunday 29 March 2026  00:46:47 +0000 (0:00:00.482)       0:01:01.139 **********
2026-03-29 00:46:51.589140 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:46:51.589143 | orchestrator |
2026-03-29 00:46:51.589147 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-29 00:46:51.589151 | orchestrator | Sunday 29 March 2026  00:46:48 +0000 (0:00:00.656)       0:01:01.796 **********
2026-03-29 00:46:51.589155 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:46:51.589159 | orchestrator |
2026-03-29 00:46:51.589162 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-29 00:46:51.589166 | orchestrator | Sunday 29 March 2026  00:46:48 +0000 (0:00:00.146)       0:01:01.943 **********
2026-03-29 00:46:51.589170 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589174 | orchestrator |
2026-03-29 00:46:51.589178 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-29 00:46:51.589184 | orchestrator | Sunday 29 March 2026  00:46:48 +0000 (0:00:00.104)       0:01:02.047 **********
2026-03-29 00:46:51.589188 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589192 | orchestrator |
2026-03-29 00:46:51.589196 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-29 00:46:51.589207 | orchestrator | Sunday 29 March 2026  00:46:48 +0000 (0:00:00.112)       0:01:02.160 **********
2026-03-29 00:46:51.589212 | orchestrator | ok: [testbed-node-5] => {
2026-03-29 00:46:51.589215 | orchestrator |     "vgs_report": {
2026-03-29 00:46:51.589219 | orchestrator |         "vg": []
2026-03-29 00:46:51.589230 | orchestrator |     }
2026-03-29 00:46:51.589235 | orchestrator | }
2026-03-29 00:46:51.589239 | orchestrator |
2026-03-29 00:46:51.589242 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-29 00:46:51.589246 | orchestrator | Sunday 29 March 2026  00:46:49 +0000 (0:00:00.142)       0:01:02.302 **********
2026-03-29 00:46:51.589250 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589254 | orchestrator |
2026-03-29 00:46:51.589258 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-29 00:46:51.589262 | orchestrator | Sunday 29 March 2026  00:46:49 +0000 (0:00:00.141)       0:01:02.443 **********
2026-03-29 00:46:51.589266 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589269 | orchestrator |
2026-03-29 00:46:51.589273 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-29 00:46:51.589277 | orchestrator | Sunday 29 March 2026  00:46:49 +0000 (0:00:00.128)       0:01:02.572 **********
2026-03-29 00:46:51.589281 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589285 | orchestrator |
2026-03-29 00:46:51.589289 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-29 00:46:51.589292 | orchestrator | Sunday 29 March 2026  00:46:49 +0000 (0:00:00.127)       0:01:02.699 **********
2026-03-29 00:46:51.589296 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589300 | orchestrator |
2026-03-29 00:46:51.589304 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-29 00:46:51.589308 | orchestrator | Sunday 29 March 2026  00:46:49 +0000 (0:00:00.124)       0:01:02.824 **********
2026-03-29 00:46:51.589312 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589315 | orchestrator |
2026-03-29 00:46:51.589319 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-29 00:46:51.589323 | orchestrator | Sunday 29 March 2026  00:46:49 +0000 (0:00:00.127)       0:01:02.952 **********
2026-03-29 00:46:51.589327 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589331 | orchestrator |
2026-03-29 00:46:51.589338 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-29 00:46:51.589344 | orchestrator | Sunday 29 March 2026  00:46:49 +0000 (0:00:00.129)       0:01:03.081 **********
2026-03-29 00:46:51.589351 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589357 | orchestrator |
2026-03-29 00:46:51.589364 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-29 00:46:51.589371 | orchestrator | Sunday 29 March 2026  00:46:50 +0000 (0:00:00.128)       0:01:03.209 **********
2026-03-29 00:46:51.589378 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589386 | orchestrator |
2026-03-29 00:46:51.589394 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-29 00:46:51.589400 | orchestrator | Sunday 29 March 2026  00:46:50 +0000 (0:00:00.280)       0:01:03.489 **********
2026-03-29 00:46:51.589406 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589414 | orchestrator |
2026-03-29 00:46:51.589421 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-29 00:46:51.589426 | orchestrator | Sunday 29 March 2026  00:46:50 +0000 (0:00:00.140)       0:01:03.630 **********
2026-03-29 00:46:51.589431 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589437 | orchestrator |
2026-03-29 00:46:51.589444 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-29 00:46:51.589450 | orchestrator | Sunday 29 March 2026  00:46:50 +0000 (0:00:00.159)       0:01:03.790 **********
2026-03-29 00:46:51.589461 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589470 | orchestrator |
2026-03-29 00:46:51.589482 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-29 00:46:51.589491 | orchestrator | Sunday 29 March 2026  00:46:50 +0000 (0:00:00.127)       0:01:03.917 **********
2026-03-29 00:46:51.589498 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589504 | orchestrator |
2026-03-29 00:46:51.589511 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-29 00:46:51.589518 | orchestrator | Sunday 29 March 2026  00:46:50 +0000 (0:00:00.133)       0:01:04.051 **********
2026-03-29 00:46:51.589524 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589531 | orchestrator |
2026-03-29 00:46:51.589538 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-29 00:46:51.589544 | orchestrator | Sunday 29 March 2026  00:46:50 +0000 (0:00:00.133)       0:01:04.185 **********
2026-03-29 00:46:51.589551 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589558 | orchestrator |
2026-03-29 00:46:51.589564 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-29 00:46:51.589571 | orchestrator | Sunday 29 March 2026  00:46:51 +0000 (0:00:00.142)       0:01:04.327 **********
2026-03-29 00:46:51.589577 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:51.589584 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:51.589590 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589596 | orchestrator |
2026-03-29 00:46:51.589603 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-29 00:46:51.589608 | orchestrator | Sunday 29 March 2026  00:46:51 +0000 (0:00:00.144)       0:01:04.472 **********
2026-03-29 00:46:51.589612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:51.589616 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:51.589620 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:51.589624 | orchestrator |
2026-03-29 00:46:51.589627 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-29 00:46:51.589631 | orchestrator | Sunday 29 March 2026  00:46:51 +0000 (0:00:00.148)       0:01:04.621 **********
2026-03-29 00:46:51.589640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:54.501432 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:54.501513 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:54.501523 | orchestrator |
2026-03-29 00:46:54.501531 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-29 00:46:54.501539 | orchestrator | Sunday 29 March 2026  00:46:51 +0000 (0:00:00.151)       0:01:04.772 **********
2026-03-29 00:46:54.501545 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:54.501552 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:54.501558 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:54.501565 | orchestrator |
2026-03-29 00:46:54.501571 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-29 00:46:54.501577 | orchestrator | Sunday 29 March 2026  00:46:51 +0000 (0:00:00.161)       0:01:04.934 **********
2026-03-29 00:46:54.501602 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:54.501608 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:54.501613 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:54.501618 | orchestrator |
2026-03-29 00:46:54.501624 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-29 00:46:54.501629 | orchestrator | Sunday 29 March 2026  00:46:51 +0000 (0:00:00.139)       0:01:05.074 **********
2026-03-29 00:46:54.501634 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:54.501640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:54.501655 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:54.501660 | orchestrator |
2026-03-29 00:46:54.501666 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-29 00:46:54.501671 | orchestrator | Sunday 29 March 2026  00:46:52 +0000 (0:00:00.300)       0:01:05.374 **********
2026-03-29 00:46:54.501676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:54.501681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:54.501687 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:54.501693 | orchestrator |
2026-03-29 00:46:54.501700 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-29 00:46:54.501706 | orchestrator | Sunday 29 March 2026  00:46:52 +0000 (0:00:00.161)       0:01:05.536 **********
2026-03-29 00:46:54.501712 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:54.501718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:54.501724 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:54.501730 | orchestrator |
2026-03-29 00:46:54.501737 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-29 00:46:54.501743 | orchestrator | Sunday 29 March 2026  00:46:52 +0000 (0:00:00.164)       0:01:05.701 **********
2026-03-29 00:46:54.501750 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:46:54.501757 | orchestrator |
2026-03-29 00:46:54.501763 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-29 00:46:54.501770 | orchestrator | Sunday 29 March 2026  00:46:53 +0000 (0:00:00.512)       0:01:06.213 **********
2026-03-29 00:46:54.501776 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:46:54.501782 | orchestrator |
2026-03-29 00:46:54.501789 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-29 00:46:54.501829 | orchestrator | Sunday 29 March 2026  00:46:53 +0000 (0:00:00.506)       0:01:06.720 **********
2026-03-29 00:46:54.501837 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:46:54.501843 | orchestrator |
2026-03-29 00:46:54.501850 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-29 00:46:54.501856 | orchestrator | Sunday 29 March 2026  00:46:53 +0000 (0:00:00.147)       0:01:06.867 **********
2026-03-29 00:46:54.501863 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'vg_name': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:54.501871 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'vg_name': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:54.501884 | orchestrator |
2026-03-29 00:46:54.501891 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-29 00:46:54.501897 | orchestrator | Sunday 29 March 2026  00:46:53 +0000 (0:00:00.173)       0:01:07.041 **********
2026-03-29 00:46:54.501919 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:54.501927 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:54.501933 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:54.501940 | orchestrator |
2026-03-29 00:46:54.501947 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-29 00:46:54.501953 | orchestrator | Sunday 29 March 2026  00:46:53 +0000 (0:00:00.136)       0:01:07.178 **********
2026-03-29 00:46:54.501961 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:54.501967 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:54.501973 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:54.501979 | orchestrator |
2026-03-29 00:46:54.501986 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-29 00:46:54.501993 | orchestrator | Sunday 29 March 2026  00:46:54 +0000 (0:00:00.161)       0:01:07.339 **********
2026-03-29 00:46:54.501999 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'})
2026-03-29 00:46:54.502006 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'})
2026-03-29 00:46:54.502064 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:46:54.502071 | orchestrator |
2026-03-29 00:46:54.502077 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-29 00:46:54.502083 | orchestrator | Sunday 29 March 2026  00:46:54 +0000 (0:00:00.182)       0:01:07.522 **********
2026-03-29 00:46:54.502089 |
orchestrator | ok: [testbed-node-5] => { 2026-03-29 00:46:54.502096 | orchestrator |  "lvm_report": { 2026-03-29 00:46:54.502103 | orchestrator |  "lv": [ 2026-03-29 00:46:54.502110 | orchestrator |  { 2026-03-29 00:46:54.502117 | orchestrator |  "lv_name": "osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292", 2026-03-29 00:46:54.502130 | orchestrator |  "vg_name": "ceph-687a2d88-e62e-55f7-9995-e7b8ae522292" 2026-03-29 00:46:54.502137 | orchestrator |  }, 2026-03-29 00:46:54.502143 | orchestrator |  { 2026-03-29 00:46:54.502149 | orchestrator |  "lv_name": "osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef", 2026-03-29 00:46:54.502156 | orchestrator |  "vg_name": "ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef" 2026-03-29 00:46:54.502162 | orchestrator |  } 2026-03-29 00:46:54.502168 | orchestrator |  ], 2026-03-29 00:46:54.502176 | orchestrator |  "pv": [ 2026-03-29 00:46:54.502183 | orchestrator |  { 2026-03-29 00:46:54.502189 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-29 00:46:54.502197 | orchestrator |  "vg_name": "ceph-687a2d88-e62e-55f7-9995-e7b8ae522292" 2026-03-29 00:46:54.502204 | orchestrator |  }, 2026-03-29 00:46:54.502211 | orchestrator |  { 2026-03-29 00:46:54.502218 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-29 00:46:54.502225 | orchestrator |  "vg_name": "ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef" 2026-03-29 00:46:54.502231 | orchestrator |  } 2026-03-29 00:46:54.502237 | orchestrator |  ] 2026-03-29 00:46:54.502243 | orchestrator |  } 2026-03-29 00:46:54.502250 | orchestrator | } 2026-03-29 00:46:54.502263 | orchestrator | 2026-03-29 00:46:54.502270 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:46:54.502276 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-29 00:46:54.502283 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-29 00:46:54.502290 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-29 00:46:54.502296 | orchestrator | 2026-03-29 00:46:54.502302 | orchestrator | 2026-03-29 00:46:54.502309 | orchestrator | 2026-03-29 00:46:54.502315 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:46:54.502321 | orchestrator | Sunday 29 March 2026 00:46:54 +0000 (0:00:00.135) 0:01:07.657 ********** 2026-03-29 00:46:54.502327 | orchestrator | =============================================================================== 2026-03-29 00:46:54.502333 | orchestrator | Create block VGs -------------------------------------------------------- 5.97s 2026-03-29 00:46:54.502340 | orchestrator | Create block LVs -------------------------------------------------------- 4.37s 2026-03-29 00:46:54.502346 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.74s 2026-03-29 00:46:54.502352 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.63s 2026-03-29 00:46:54.502358 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s 2026-03-29 00:46:54.502364 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.51s 2026-03-29 00:46:54.502370 | orchestrator | Add known partitions to the list of available block devices ------------- 1.48s 2026-03-29 00:46:54.502377 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.44s 2026-03-29 00:46:54.502390 | orchestrator | Add known links to the list of available block devices ------------------ 1.25s 2026-03-29 00:46:54.789761 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s 2026-03-29 00:46:54.789832 | orchestrator | Print LVM report data --------------------------------------------------- 0.79s 2026-03-29 00:46:54.789837 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-03-29 00:46:54.789841 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2026-03-29 00:46:54.789844 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-03-29 00:46:54.789847 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-03-29 00:46:54.789850 | orchestrator | Get initial list of available block devices ----------------------------- 0.64s 2026-03-29 00:46:54.789853 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2026-03-29 00:46:54.789856 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.57s 2026-03-29 00:46:54.789859 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2026-03-29 00:46:54.789862 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.56s 2026-03-29 00:47:07.144338 | orchestrator | 2026-03-29 00:47:07 | INFO  | Task 0095bf04-ff72-4f07-b373-a23924d0228b (facts) was prepared for execution. 2026-03-29 00:47:07.144424 | orchestrator | 2026-03-29 00:47:07 | INFO  | It takes a moment until task 0095bf04-ff72-4f07-b373-a23924d0228b (facts) has been started and output is visible here. 
2026-03-29 00:47:18.689961 | orchestrator | 2026-03-29 00:47:18.690051 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-29 00:47:18.690058 | orchestrator | 2026-03-29 00:47:18.690062 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-29 00:47:18.690065 | orchestrator | Sunday 29 March 2026 00:47:11 +0000 (0:00:00.251) 0:00:00.251 ********** 2026-03-29 00:47:18.690082 | orchestrator | ok: [testbed-manager] 2026-03-29 00:47:18.690087 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:47:18.690090 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:47:18.690093 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:47:18.690096 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:47:18.690099 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:47:18.690102 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:47:18.690105 | orchestrator | 2026-03-29 00:47:18.690108 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-29 00:47:18.690112 | orchestrator | Sunday 29 March 2026 00:47:12 +0000 (0:00:01.201) 0:00:01.453 ********** 2026-03-29 00:47:18.690115 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:47:18.690119 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:47:18.690122 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:47:18.690125 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:47:18.690128 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:47:18.690132 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:47:18.690135 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:47:18.690138 | orchestrator | 2026-03-29 00:47:18.690141 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-29 00:47:18.690144 | orchestrator | 2026-03-29 00:47:18.690148 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-29 00:47:18.690151 | orchestrator | Sunday 29 March 2026 00:47:13 +0000 (0:00:01.166) 0:00:02.619 ********** 2026-03-29 00:47:18.690155 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:47:18.690160 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:47:18.690165 | orchestrator | ok: [testbed-manager] 2026-03-29 00:47:18.690170 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:47:18.690175 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:47:18.690180 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:47:18.690183 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:47:18.690186 | orchestrator | 2026-03-29 00:47:18.690192 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-29 00:47:18.690196 | orchestrator | 2026-03-29 00:47:18.690201 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-29 00:47:18.690213 | orchestrator | Sunday 29 March 2026 00:47:17 +0000 (0:00:04.504) 0:00:07.124 ********** 2026-03-29 00:47:18.690218 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:47:18.690223 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:47:18.690228 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:47:18.690233 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:47:18.690238 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:47:18.690248 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:47:18.690253 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:47:18.690264 | orchestrator | 2026-03-29 00:47:18.690269 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:47:18.690273 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:47:18.690279 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-29 00:47:18.690286 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:47:18.690293 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:47:18.690298 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:47:18.690303 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:47:18.690307 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:47:18.690319 | orchestrator | 2026-03-29 00:47:18.690324 | orchestrator | 2026-03-29 00:47:18.690329 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:47:18.690334 | orchestrator | Sunday 29 March 2026 00:47:18 +0000 (0:00:00.480) 0:00:07.605 ********** 2026-03-29 00:47:18.690338 | orchestrator | =============================================================================== 2026-03-29 00:47:18.690342 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.50s 2026-03-29 00:47:18.690345 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.20s 2026-03-29 00:47:18.690348 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s 2026-03-29 00:47:18.690351 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2026-03-29 00:47:30.963594 | orchestrator | 2026-03-29 00:47:30 | INFO  | Task 526f5825-9bf3-412a-9d96-8fde795c6716 (frr) was prepared for execution. 2026-03-29 00:47:30.963672 | orchestrator | 2026-03-29 00:47:30 | INFO  | It takes a moment until task 526f5825-9bf3-412a-9d96-8fde795c6716 (frr) has been started and output is visible here. 
2026-03-29 00:47:55.114642 | orchestrator | 2026-03-29 00:47:55.114704 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-29 00:47:55.114713 | orchestrator | 2026-03-29 00:47:55.114720 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-29 00:47:55.114738 | orchestrator | Sunday 29 March 2026 00:47:34 +0000 (0:00:00.218) 0:00:00.218 ********** 2026-03-29 00:47:55.114744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 00:47:55.114752 | orchestrator | 2026-03-29 00:47:55.114759 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-29 00:47:55.114765 | orchestrator | Sunday 29 March 2026 00:47:35 +0000 (0:00:00.195) 0:00:00.413 ********** 2026-03-29 00:47:55.114772 | orchestrator | changed: [testbed-manager] 2026-03-29 00:47:55.114779 | orchestrator | 2026-03-29 00:47:55.114785 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-29 00:47:55.114791 | orchestrator | Sunday 29 March 2026 00:47:36 +0000 (0:00:01.118) 0:00:01.532 ********** 2026-03-29 00:47:55.114800 | orchestrator | changed: [testbed-manager] 2026-03-29 00:47:55.114806 | orchestrator | 2026-03-29 00:47:55.114812 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-29 00:47:55.114818 | orchestrator | Sunday 29 March 2026 00:47:45 +0000 (0:00:08.798) 0:00:10.330 ********** 2026-03-29 00:47:55.114824 | orchestrator | ok: [testbed-manager] 2026-03-29 00:47:55.114831 | orchestrator | 2026-03-29 00:47:55.114838 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-29 00:47:55.114844 | orchestrator | Sunday 29 March 2026 00:47:46 +0000 (0:00:00.988) 0:00:11.319 ********** 2026-03-29 
00:47:55.114849 | orchestrator | changed: [testbed-manager] 2026-03-29 00:47:55.114856 | orchestrator | 2026-03-29 00:47:55.114862 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-29 00:47:55.114868 | orchestrator | Sunday 29 March 2026 00:47:47 +0000 (0:00:00.998) 0:00:12.318 ********** 2026-03-29 00:47:55.114875 | orchestrator | ok: [testbed-manager] 2026-03-29 00:47:55.114879 | orchestrator | 2026-03-29 00:47:55.114925 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-29 00:47:55.114930 | orchestrator | Sunday 29 March 2026 00:47:48 +0000 (0:00:01.248) 0:00:13.566 ********** 2026-03-29 00:47:55.114934 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:47:55.114938 | orchestrator | 2026-03-29 00:47:55.114942 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-29 00:47:55.114946 | orchestrator | Sunday 29 March 2026 00:47:48 +0000 (0:00:00.169) 0:00:13.736 ********** 2026-03-29 00:47:55.114950 | orchestrator | skipping: [testbed-manager] 2026-03-29 00:47:55.114969 | orchestrator | 2026-03-29 00:47:55.114976 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-29 00:47:55.114982 | orchestrator | Sunday 29 March 2026 00:47:48 +0000 (0:00:00.145) 0:00:13.881 ********** 2026-03-29 00:47:55.114989 | orchestrator | changed: [testbed-manager] 2026-03-29 00:47:55.114996 | orchestrator | 2026-03-29 00:47:55.115003 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-29 00:47:55.115011 | orchestrator | Sunday 29 March 2026 00:47:49 +0000 (0:00:01.002) 0:00:14.884 ********** 2026-03-29 00:47:55.115018 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-29 00:47:55.115023 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-29 00:47:55.115028 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-29 00:47:55.115032 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-29 00:47:55.115035 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-29 00:47:55.115048 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-29 00:47:55.115056 | orchestrator | 2026-03-29 00:47:55.115062 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-29 00:47:55.115069 | orchestrator | Sunday 29 March 2026 00:47:51 +0000 (0:00:02.194) 0:00:17.079 ********** 2026-03-29 00:47:55.115075 | orchestrator | ok: [testbed-manager] 2026-03-29 00:47:55.115081 | orchestrator | 2026-03-29 00:47:55.115088 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-29 00:47:55.115101 | orchestrator | Sunday 29 March 2026 00:47:53 +0000 (0:00:01.656) 0:00:18.735 ********** 2026-03-29 00:47:55.115108 | orchestrator | changed: [testbed-manager] 2026-03-29 00:47:55.115113 | orchestrator | 2026-03-29 00:47:55.115120 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:47:55.115125 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 00:47:55.115129 | orchestrator | 2026-03-29 00:47:55.115136 | orchestrator | 2026-03-29 00:47:55.115142 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:47:55.115148 | orchestrator | Sunday 29 March 2026 00:47:54 +0000 (0:00:01.381) 0:00:20.116 ********** 2026-03-29 00:47:55.115155 | 
orchestrator | =============================================================================== 2026-03-29 00:47:55.115161 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.80s 2026-03-29 00:47:55.115168 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.19s 2026-03-29 00:47:55.115174 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.66s 2026-03-29 00:47:55.115181 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.38s 2026-03-29 00:47:55.115188 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.25s 2026-03-29 00:47:55.115205 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.12s 2026-03-29 00:47:55.115235 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.00s 2026-03-29 00:47:55.115242 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.00s 2026-03-29 00:47:55.115249 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.99s 2026-03-29 00:47:55.115255 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2026-03-29 00:47:55.115262 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.17s 2026-03-29 00:47:55.115268 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-03-29 00:47:55.313781 | orchestrator | 2026-03-29 00:47:55.315381 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Mar 29 00:47:55 UTC 2026 2026-03-29 00:47:55.315430 | orchestrator | 2026-03-29 00:47:57.079178 | orchestrator | 2026-03-29 00:47:57 | INFO  | Collection nutshell is prepared for execution 2026-03-29 00:47:57.079252 | orchestrator | 2026-03-29 00:47:57 | INFO  | A [0] - 
dotfiles 2026-03-29 00:48:07.156250 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [0] - homer 2026-03-29 00:48:07.156372 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [0] - netdata 2026-03-29 00:48:07.156388 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [0] - openstackclient 2026-03-29 00:48:07.156400 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [0] - phpmyadmin 2026-03-29 00:48:07.156410 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [0] - common 2026-03-29 00:48:07.161322 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [1] -- loadbalancer 2026-03-29 00:48:07.161423 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [2] --- opensearch 2026-03-29 00:48:07.161439 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [2] --- mariadb-ng 2026-03-29 00:48:07.162196 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [3] ---- horizon 2026-03-29 00:48:07.162237 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [3] ---- keystone 2026-03-29 00:48:07.162257 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [4] ----- neutron 2026-03-29 00:48:07.162270 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [5] ------ wait-for-nova 2026-03-29 00:48:07.162283 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [6] ------- octavia 2026-03-29 00:48:07.163738 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [4] ----- barbican 2026-03-29 00:48:07.163866 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [4] ----- designate 2026-03-29 00:48:07.163891 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [4] ----- ironic 2026-03-29 00:48:07.164121 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [4] ----- placement 2026-03-29 00:48:07.164662 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [4] ----- magnum 2026-03-29 00:48:07.165021 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [1] -- openvswitch 2026-03-29 00:48:07.165248 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [2] --- ovn 2026-03-29 00:48:07.165690 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [1] -- memcached 2026-03-29 
00:48:07.165712 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [1] -- redis 2026-03-29 00:48:07.165837 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [1] -- rabbitmq-ng 2026-03-29 00:48:07.166385 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [0] - kubernetes 2026-03-29 00:48:07.169151 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [1] -- kubeconfig 2026-03-29 00:48:07.169253 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [1] -- copy-kubeconfig 2026-03-29 00:48:07.169779 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [0] - ceph 2026-03-29 00:48:07.172334 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [1] -- ceph-pools 2026-03-29 00:48:07.172636 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [2] --- copy-ceph-keys 2026-03-29 00:48:07.172792 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [3] ---- cephclient 2026-03-29 00:48:07.172822 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-29 00:48:07.172846 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [4] ----- wait-for-keystone 2026-03-29 00:48:07.173018 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-29 00:48:07.173212 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [5] ------ glance 2026-03-29 00:48:07.173767 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [5] ------ cinder 2026-03-29 00:48:07.173844 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [5] ------ nova 2026-03-29 00:48:07.174299 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [4] ----- prometheus 2026-03-29 00:48:07.174331 | orchestrator | 2026-03-29 00:48:07 | INFO  | A [5] ------ grafana 2026-03-29 00:48:07.357446 | orchestrator | 2026-03-29 00:48:07 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-29 00:48:07.357526 | orchestrator | 2026-03-29 00:48:07 | INFO  | Tasks are running in the background 2026-03-29 00:48:10.125741 | orchestrator | 2026-03-29 00:48:10 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-03-29 00:48:12.232036 | orchestrator | 2026-03-29 00:48:12 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:12.232586 | orchestrator | 2026-03-29 00:48:12 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:12.235080 | orchestrator | 2026-03-29 00:48:12 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:12.235722 | orchestrator | 2026-03-29 00:48:12 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:12.236225 | orchestrator | 2026-03-29 00:48:12 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:12.236789 | orchestrator | 2026-03-29 00:48:12 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:12.237487 | orchestrator | 2026-03-29 00:48:12 | INFO  | Task 5bcefa7b-e3f4-4e85-80fa-0eea4f46aa9e is in state STARTED 2026-03-29 00:48:12.237601 | orchestrator | 2026-03-29 00:48:12 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:15.300822 | orchestrator | 2026-03-29 00:48:15 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:15.300897 | orchestrator | 2026-03-29 00:48:15 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:15.300993 | orchestrator | 2026-03-29 00:48:15 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:15.300999 | orchestrator | 2026-03-29 00:48:15 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:15.301003 | orchestrator | 2026-03-29 00:48:15 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:15.301008 | orchestrator | 2026-03-29 00:48:15 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:15.301012 | orchestrator | 2026-03-29 00:48:15 | INFO  | Task 
5bcefa7b-e3f4-4e85-80fa-0eea4f46aa9e is in state STARTED 2026-03-29 00:48:15.301016 | orchestrator | 2026-03-29 00:48:15 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:18.326366 | orchestrator | 2026-03-29 00:48:18 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:18.331605 | orchestrator | 2026-03-29 00:48:18 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:18.331854 | orchestrator | 2026-03-29 00:48:18 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:18.332003 | orchestrator | 2026-03-29 00:48:18 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:18.338178 | orchestrator | 2026-03-29 00:48:18 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:18.340870 | orchestrator | 2026-03-29 00:48:18 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:18.343900 | orchestrator | 2026-03-29 00:48:18 | INFO  | Task 5bcefa7b-e3f4-4e85-80fa-0eea4f46aa9e is in state STARTED 2026-03-29 00:48:18.344059 | orchestrator | 2026-03-29 00:48:18 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:21.645695 | orchestrator | 2026-03-29 00:48:21 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:21.757392 | orchestrator | 2026-03-29 00:48:21 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:21.862982 | orchestrator | 2026-03-29 00:48:21 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:21.863426 | orchestrator | 2026-03-29 00:48:21 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:21.864020 | orchestrator | 2026-03-29 00:48:21 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:21.864721 | orchestrator | 2026-03-29 00:48:21 | INFO  | Task 
6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:21.865452 | orchestrator | 2026-03-29 00:48:21 | INFO  | Task 5bcefa7b-e3f4-4e85-80fa-0eea4f46aa9e is in state STARTED 2026-03-29 00:48:21.865999 | orchestrator | 2026-03-29 00:48:21 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:24.950518 | orchestrator | 2026-03-29 00:48:24 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:24.951860 | orchestrator | 2026-03-29 00:48:24 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:24.952959 | orchestrator | 2026-03-29 00:48:24 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:24.955053 | orchestrator | 2026-03-29 00:48:24 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:24.955730 | orchestrator | 2026-03-29 00:48:24 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:24.956583 | orchestrator | 2026-03-29 00:48:24 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:24.959655 | orchestrator | 2026-03-29 00:48:24 | INFO  | Task 5bcefa7b-e3f4-4e85-80fa-0eea4f46aa9e is in state STARTED 2026-03-29 00:48:24.959700 | orchestrator | 2026-03-29 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:28.005963 | orchestrator | 2026-03-29 00:48:28 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:28.007056 | orchestrator | 2026-03-29 00:48:28 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:28.046748 | orchestrator | 2026-03-29 00:48:28 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:28.046796 | orchestrator | 2026-03-29 00:48:28 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:28.046802 | orchestrator | 2026-03-29 00:48:28 | INFO  | Task 
72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:28.046808 | orchestrator | 2026-03-29 00:48:28 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:28.046813 | orchestrator | 2026-03-29 00:48:28 | INFO  | Task 5bcefa7b-e3f4-4e85-80fa-0eea4f46aa9e is in state STARTED 2026-03-29 00:48:28.046818 | orchestrator | 2026-03-29 00:48:28 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:31.127760 | orchestrator | 2026-03-29 00:48:31 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:31.127815 | orchestrator | 2026-03-29 00:48:31 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:31.127821 | orchestrator | 2026-03-29 00:48:31 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:31.127839 | orchestrator | 2026-03-29 00:48:31 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:31.127844 | orchestrator | 2026-03-29 00:48:31 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:31.127847 | orchestrator | 2026-03-29 00:48:31 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:31.129588 | orchestrator | 2026-03-29 00:48:31 | INFO  | Task 5bcefa7b-e3f4-4e85-80fa-0eea4f46aa9e is in state STARTED 2026-03-29 00:48:31.129624 | orchestrator | 2026-03-29 00:48:31 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:34.196816 | orchestrator | 2026-03-29 00:48:34 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:34.199819 | orchestrator | 2026-03-29 00:48:34 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:34.199853 | orchestrator | 2026-03-29 00:48:34 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:34.201228 | orchestrator | 2026-03-29 00:48:34 | INFO  | Task 
91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:34.201250 | orchestrator | 2026-03-29 00:48:34 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:34.201750 | orchestrator | 2026-03-29 00:48:34 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:34.214307 | orchestrator | 2026-03-29 00:48:34 | INFO  | Task 5bcefa7b-e3f4-4e85-80fa-0eea4f46aa9e is in state SUCCESS 2026-03-29 00:48:34.214867 | orchestrator | 2026-03-29 00:48:34.214887 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-03-29 00:48:34.214897 | orchestrator | 2026-03-29 00:48:34.214906 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-03-29 00:48:34.214915 | orchestrator | Sunday 29 March 2026 00:48:19 +0000 (0:00:00.784) 0:00:00.784 ********** 2026-03-29 00:48:34.215009 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:48:34.215019 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:48:34.215028 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:48:34.215037 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:48:34.215046 | orchestrator | changed: [testbed-manager] 2026-03-29 00:48:34.215054 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:48:34.215063 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:48:34.215071 | orchestrator | 2026-03-29 00:48:34.215079 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2026-03-29 00:48:34.215088 | orchestrator | Sunday 29 March 2026 00:48:23 +0000 (0:00:03.862) 0:00:04.647 ********** 2026-03-29 00:48:34.215097 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-29 00:48:34.215106 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-29 00:48:34.215115 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-29 00:48:34.215123 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-29 00:48:34.215132 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-29 00:48:34.215141 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-29 00:48:34.215149 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-29 00:48:34.215158 | orchestrator | 2026-03-29 00:48:34.215166 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2026-03-29 00:48:34.215173 | orchestrator | Sunday 29 March 2026 00:48:24 +0000 (0:00:01.659) 0:00:06.306 ********** 2026-03-29 00:48:34.215180 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:48:23.819436', 'end': '2026-03-29 00:48:23.826408', 'delta': '0:00:00.006972', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:48:34.215200 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:48:23.905936', 'end': '2026-03-29 00:48:23.911002', 'delta': '0:00:00.005066', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:48:34.215206 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:48:24.083344', 'end': '2026-03-29 00:48:24.089343', 'delta': '0:00:00.005999', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:48:34.215222 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:48:24.079683', 'end': '2026-03-29 00:48:24.086970', 'delta': '0:00:00.007287', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:48:34.215344 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:48:24.248202', 'end': '2026-03-29 00:48:24.252138', 'delta': '0:00:00.003936', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:48:34.215350 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:48:24.447964', 'end': '2026-03-29 00:48:24.452648', 'delta': '0:00:00.004684', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:48:34.215362 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-29 00:48:24.668530', 'end': '2026-03-29 00:48:24.673807', 'delta': '0:00:00.005277', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-03-29 00:48:34.215368 | orchestrator | 2026-03-29 00:48:34.215372 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2026-03-29 00:48:34.215377 | orchestrator | Sunday 29 March 2026 00:48:27 +0000 (0:00:02.721) 0:00:09.027 ********** 2026-03-29 00:48:34.215382 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-03-29 00:48:34.215387 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-03-29 00:48:34.215391 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-03-29 00:48:34.215396 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-03-29 00:48:34.215400 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-03-29 00:48:34.215405 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-03-29 00:48:34.215409 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-03-29 00:48:34.215414 | orchestrator | 2026-03-29 00:48:34.215418 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2026-03-29 00:48:34.215423 | orchestrator | Sunday 29 March 2026 00:48:29 +0000 (0:00:01.901) 0:00:10.929 ********** 2026-03-29 00:48:34.215427 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-03-29 00:48:34.215432 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-03-29 00:48:34.215436 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-03-29 00:48:34.215441 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-03-29 00:48:34.215445 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-03-29 00:48:34.215450 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-03-29 00:48:34.215455 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-03-29 00:48:34.215459 | orchestrator | 2026-03-29 00:48:34.215464 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:48:34.215472 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:48:34.215478 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:48:34.215482 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:48:34.215487 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:48:34.215495 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:48:34.215500 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:48:34.215505 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:48:34.215509 | orchestrator | 2026-03-29 00:48:34.215514 | orchestrator | 2026-03-29 00:48:34.215518 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:48:34.215523 | orchestrator | Sunday 29 March 2026 00:48:32 +0000 (0:00:03.248) 0:00:14.178 ********** 2026-03-29 00:48:34.215527 | orchestrator | =============================================================================== 2026-03-29 00:48:34.215532 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.86s 2026-03-29 00:48:34.215536 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.25s 2026-03-29 00:48:34.215541 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.72s 2026-03-29 00:48:34.215546 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.90s 2026-03-29 00:48:34.215550 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 1.66s 2026-03-29 00:48:34.215555 | orchestrator | 2026-03-29 00:48:34 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:37.415855 | orchestrator | 2026-03-29 00:48:37 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:48:37.415994 | orchestrator | 2026-03-29 00:48:37 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:37.416005 | orchestrator | 2026-03-29 00:48:37 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:37.416012 | orchestrator | 2026-03-29 00:48:37 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:37.416018 | orchestrator | 2026-03-29 00:48:37 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:37.416024 | orchestrator | 2026-03-29 00:48:37 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:37.416030 | orchestrator | 2026-03-29 00:48:37 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:37.416037 | orchestrator | 2026-03-29 00:48:37 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:40.580262 | orchestrator | 2026-03-29 00:48:40 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:48:40.580339 | orchestrator | 2026-03-29 00:48:40 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:40.580346 | orchestrator | 2026-03-29 00:48:40 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:40.580350 | orchestrator | 2026-03-29 00:48:40 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:40.580355 | orchestrator | 2026-03-29 00:48:40 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:40.580359 | orchestrator | 2026-03-29 00:48:40 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state 
STARTED 2026-03-29 00:48:40.580363 | orchestrator | 2026-03-29 00:48:40 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:40.580367 | orchestrator | 2026-03-29 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:43.496181 | orchestrator | 2026-03-29 00:48:43 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:48:43.496242 | orchestrator | 2026-03-29 00:48:43 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:43.496247 | orchestrator | 2026-03-29 00:48:43 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:43.496251 | orchestrator | 2026-03-29 00:48:43 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:43.496256 | orchestrator | 2026-03-29 00:48:43 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:43.496260 | orchestrator | 2026-03-29 00:48:43 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:43.496264 | orchestrator | 2026-03-29 00:48:43 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:43.496267 | orchestrator | 2026-03-29 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:46.519509 | orchestrator | 2026-03-29 00:48:46 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:48:46.522557 | orchestrator | 2026-03-29 00:48:46 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:46.525442 | orchestrator | 2026-03-29 00:48:46 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:46.526689 | orchestrator | 2026-03-29 00:48:46 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:46.527175 | orchestrator | 2026-03-29 00:48:46 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 
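The `geerlingguy.dotfiles` task output above ("Remove existing dotfiles file if a replacement is being linked") shows the role probing each target with `ls -F`, treating a non-zero return code as "file absent" (`failed_when_result: False` in the logged result). A minimal sketch of that classification step, with hypothetical names, is:

```python
import subprocess

def dotfile_status(path: str) -> str:
    """Classify a dotfile target the way the logged check does.

    Runs `ls -F` on the path: a non-zero rc means the file is absent
    (tolerated, cf. failed_when_result: False in the log); a trailing
    '@' in the -F output marks an existing symlink; anything else is a
    regular file that would be removed before linking a replacement.
    Function and return values here are illustrative, not the role's API.
    """
    proc = subprocess.run(["ls", "-F", path], capture_output=True, text=True)
    if proc.returncode != 0:
        return "absent"
    if proc.stdout.strip().endswith("@"):
        return "link"
    return "file"
```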
2026-03-29 00:48:46.530364 | orchestrator | 2026-03-29 00:48:46 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:46.581497 | orchestrator | 2026-03-29 00:48:46 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:46.581585 | orchestrator | 2026-03-29 00:48:46 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:49.582873 | orchestrator | 2026-03-29 00:48:49 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:48:49.583109 | orchestrator | 2026-03-29 00:48:49 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:49.584092 | orchestrator | 2026-03-29 00:48:49 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:49.584152 | orchestrator | 2026-03-29 00:48:49 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:49.584709 | orchestrator | 2026-03-29 00:48:49 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:49.586376 | orchestrator | 2026-03-29 00:48:49 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:49.587285 | orchestrator | 2026-03-29 00:48:49 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:49.587344 | orchestrator | 2026-03-29 00:48:49 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:52.735801 | orchestrator | 2026-03-29 00:48:52 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:48:52.735923 | orchestrator | 2026-03-29 00:48:52 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:52.735930 | orchestrator | 2026-03-29 00:48:52 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:52.735978 | orchestrator | 2026-03-29 00:48:52 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 
2026-03-29 00:48:52.735986 | orchestrator | 2026-03-29 00:48:52 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:52.736018 | orchestrator | 2026-03-29 00:48:52 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:52.736027 | orchestrator | 2026-03-29 00:48:52 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:52.736034 | orchestrator | 2026-03-29 00:48:52 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:55.727922 | orchestrator | 2026-03-29 00:48:55 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:48:55.727991 | orchestrator | 2026-03-29 00:48:55 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:55.727999 | orchestrator | 2026-03-29 00:48:55 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:48:55.728006 | orchestrator | 2026-03-29 00:48:55 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:55.728012 | orchestrator | 2026-03-29 00:48:55 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:55.728017 | orchestrator | 2026-03-29 00:48:55 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:55.728023 | orchestrator | 2026-03-29 00:48:55 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:55.728030 | orchestrator | 2026-03-29 00:48:55 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:48:58.790110 | orchestrator | 2026-03-29 00:48:58 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:48:58.790167 | orchestrator | 2026-03-29 00:48:58 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:48:58.790175 | orchestrator | 2026-03-29 00:48:58 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 
2026-03-29 00:48:58.790181 | orchestrator | 2026-03-29 00:48:58 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:48:58.790187 | orchestrator | 2026-03-29 00:48:58 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:48:58.790193 | orchestrator | 2026-03-29 00:48:58 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:48:58.790199 | orchestrator | 2026-03-29 00:48:58 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:48:58.790205 | orchestrator | 2026-03-29 00:48:58 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:01.844316 | orchestrator | 2026-03-29 00:49:01 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:49:01.844377 | orchestrator | 2026-03-29 00:49:01 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:49:01.844385 | orchestrator | 2026-03-29 00:49:01 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:49:01.844391 | orchestrator | 2026-03-29 00:49:01 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:49:01.844396 | orchestrator | 2026-03-29 00:49:01 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:49:01.844402 | orchestrator | 2026-03-29 00:49:01 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:49:01.844405 | orchestrator | 2026-03-29 00:49:01 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:49:01.844409 | orchestrator | 2026-03-29 00:49:01 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:04.861284 | orchestrator | 2026-03-29 00:49:04 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:49:04.861339 | orchestrator | 2026-03-29 00:49:04 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 
2026-03-29 00:49:04.861430 | orchestrator | 2026-03-29 00:49:04 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state STARTED 2026-03-29 00:49:04.863442 | orchestrator | 2026-03-29 00:49:04 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:49:04.866134 | orchestrator | 2026-03-29 00:49:04 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:49:04.866941 | orchestrator | 2026-03-29 00:49:04 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:49:04.868384 | orchestrator | 2026-03-29 00:49:04 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:49:04.868435 | orchestrator | 2026-03-29 00:49:04 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:07.908655 | orchestrator | 2026-03-29 00:49:07 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:49:07.910778 | orchestrator | 2026-03-29 00:49:07 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:49:07.913167 | orchestrator | 2026-03-29 00:49:07 | INFO  | Task a685864d-a899-4af8-bb74-630e3964d66c is in state SUCCESS 2026-03-29 00:49:07.917682 | orchestrator | 2026-03-29 00:49:07 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:49:07.920147 | orchestrator | 2026-03-29 00:49:07 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:49:07.925004 | orchestrator | 2026-03-29 00:49:07 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state STARTED 2026-03-29 00:49:07.926933 | orchestrator | 2026-03-29 00:49:07 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:49:07.927650 | orchestrator | 2026-03-29 00:49:07 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:10.969072 | orchestrator | 2026-03-29 00:49:10 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 
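The recurring "Task <uuid> is in state STARTED … Wait 1 second(s) until the next check" lines above come from a poll-until-terminal-state loop: each pending task UUID is checked, finished tasks (SUCCESS) drop out of the set, and the loop sleeps one second between rounds. A minimal sketch of that pattern, assuming a hypothetical `get_state` callable in place of the real task API:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=60.0):
    """Poll task states until none is pending, mirroring the loop in
    the log above. `get_state(task_id)` is a stand-in for the real
    state lookup; tasks reaching SUCCESS or FAILURE leave the polling
    set, and the loop sleeps `interval` seconds between rounds.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for tid in sorted(pending):          # sorted() copies, safe to discard
            state = get_state(tid)
            if state in ("SUCCESS", "FAILURE"):
                results[tid] = state
                pending.discard(tid)
        if pending:
            time.sleep(interval)
    return results
```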
2026-03-29 00:49:10.973366 | orchestrator | 2026-03-29 00:49:10 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:49:10.973445 | orchestrator | 2026-03-29 00:49:10 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:49:10.973654 | orchestrator | 2026-03-29 00:49:10 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:49:10.977997 | orchestrator | 2026-03-29 00:49:10 | INFO  | Task 72d58561-ff2a-4883-b520-1ad91e05e1b2 is in state SUCCESS 2026-03-29 00:49:10.979443 | orchestrator | 2026-03-29 00:49:10 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:49:10.979485 | orchestrator | 2026-03-29 00:49:10 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:14.014291 | orchestrator | 2026-03-29 00:49:14 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:49:14.015459 | orchestrator | 2026-03-29 00:49:14 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:49:14.016905 | orchestrator | 2026-03-29 00:49:14 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:49:14.017820 | orchestrator | 2026-03-29 00:49:14 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:49:14.019588 | orchestrator | 2026-03-29 00:49:14 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:49:14.019907 | orchestrator | 2026-03-29 00:49:14 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:49:17.086163 | orchestrator | 2026-03-29 00:49:17 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED 2026-03-29 00:49:17.086822 | orchestrator | 2026-03-29 00:49:17 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED 2026-03-29 00:49:17.088713 | orchestrator | 2026-03-29 00:49:17 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 
2026-03-29 00:49:17.089837 | orchestrator | 2026-03-29 00:49:17 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:17.094178 | orchestrator | 2026-03-29 00:49:17 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:17.094225 | orchestrator | 2026-03-29 00:49:17 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:20.154091 | orchestrator | 2026-03-29 00:49:20 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED
2026-03-29 00:49:20.155265 | orchestrator | 2026-03-29 00:49:20 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:20.156149 | orchestrator | 2026-03-29 00:49:20 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:20.157143 | orchestrator | 2026-03-29 00:49:20 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:20.158206 | orchestrator | 2026-03-29 00:49:20 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:20.158257 | orchestrator | 2026-03-29 00:49:20 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:23.204155 | orchestrator | 2026-03-29 00:49:23 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED
2026-03-29 00:49:23.204559 | orchestrator | 2026-03-29 00:49:23 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:23.207827 | orchestrator | 2026-03-29 00:49:23 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:23.208482 | orchestrator | 2026-03-29 00:49:23 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:23.209801 | orchestrator | 2026-03-29 00:49:23 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:23.209841 | orchestrator | 2026-03-29 00:49:23 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:26.253210 | orchestrator | 2026-03-29 00:49:26 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED
2026-03-29 00:49:26.255068 | orchestrator | 2026-03-29 00:49:26 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:26.261607 | orchestrator | 2026-03-29 00:49:26 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:26.261663 | orchestrator | 2026-03-29 00:49:26 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:26.261672 | orchestrator | 2026-03-29 00:49:26 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:26.261680 | orchestrator | 2026-03-29 00:49:26 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:29.295250 | orchestrator | 2026-03-29 00:49:29 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED
2026-03-29 00:49:29.297041 | orchestrator | 2026-03-29 00:49:29 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:29.298998 | orchestrator | 2026-03-29 00:49:29 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:29.302104 | orchestrator | 2026-03-29 00:49:29 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:29.303948 | orchestrator | 2026-03-29 00:49:29 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:29.304084 | orchestrator | 2026-03-29 00:49:29 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:32.346053 | orchestrator | 2026-03-29 00:49:32 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED
2026-03-29 00:49:32.347298 | orchestrator | 2026-03-29 00:49:32 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:32.350556 | orchestrator | 2026-03-29 00:49:32 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:32.359354 | orchestrator | 2026-03-29 00:49:32 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:32.362363 | orchestrator | 2026-03-29 00:49:32 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:32.362511 | orchestrator | 2026-03-29 00:49:32 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:35.437700 | orchestrator | 2026-03-29 00:49:35 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED
2026-03-29 00:49:35.461776 | orchestrator | 2026-03-29 00:49:35 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:35.462940 | orchestrator | 2026-03-29 00:49:35 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:35.464499 | orchestrator | 2026-03-29 00:49:35 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:35.468944 | orchestrator | 2026-03-29 00:49:35 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:35.469013 | orchestrator | 2026-03-29 00:49:35 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:38.528607 | orchestrator | 2026-03-29 00:49:38 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED
2026-03-29 00:49:38.541529 | orchestrator | 2026-03-29 00:49:38 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:38.541731 | orchestrator | 2026-03-29 00:49:38 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:38.541770 | orchestrator | 2026-03-29 00:49:38 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:38.549017 | orchestrator | 2026-03-29 00:49:38 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:38.549096 | orchestrator | 2026-03-29 00:49:38 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:41.603521 | orchestrator | 2026-03-29 00:49:41 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED
2026-03-29 00:49:41.604286 | orchestrator | 2026-03-29 00:49:41 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:41.606262 | orchestrator | 2026-03-29 00:49:41 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:41.608046 | orchestrator | 2026-03-29 00:49:41 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:41.609956 | orchestrator | 2026-03-29 00:49:41 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:41.610078 | orchestrator | 2026-03-29 00:49:41 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:44.650696 | orchestrator | 2026-03-29 00:49:44 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state STARTED
2026-03-29 00:49:44.657544 | orchestrator | 2026-03-29 00:49:44 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:44.657596 | orchestrator | 2026-03-29 00:49:44 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:44.657618 | orchestrator | 2026-03-29 00:49:44 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:44.657817 | orchestrator | 2026-03-29 00:49:44 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:44.657911 | orchestrator | 2026-03-29 00:49:44 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:47.730123 | orchestrator | 2026-03-29 00:49:47 | INFO  | Task b20ab409-e06c-48c7-ab80-4f13dadfecd7 is in state SUCCESS
2026-03-29 00:49:47.731929 | orchestrator |
2026-03-29 00:49:47.731990 | orchestrator |
2026-03-29 00:49:47.732002 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-29 00:49:47.732010 | orchestrator |
2026-03-29 00:49:47.732019 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-29 00:49:47.732027 | orchestrator | Sunday 29 March 2026 00:48:20 +0000 (0:00:00.922) 0:00:00.922 **********
2026-03-29 00:49:47.732034 | orchestrator | ok: [testbed-manager] => {
2026-03-29 00:49:47.732043 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-29 00:49:47.732052 | orchestrator | }
2026-03-29 00:49:47.732060 | orchestrator |
2026-03-29 00:49:47.732068 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-29 00:49:47.732075 | orchestrator | Sunday 29 March 2026 00:48:20 +0000 (0:00:00.232) 0:00:01.155 **********
2026-03-29 00:49:47.732083 | orchestrator | ok: [testbed-manager]
2026-03-29 00:49:47.732091 | orchestrator |
2026-03-29 00:49:47.732099 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-29 00:49:47.732107 | orchestrator | Sunday 29 March 2026 00:48:22 +0000 (0:00:02.699) 0:00:03.855 **********
2026-03-29 00:49:47.732115 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-29 00:49:47.732123 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-29 00:49:47.732130 | orchestrator |
2026-03-29 00:49:47.732138 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-29 00:49:47.732145 | orchestrator | Sunday 29 March 2026 00:48:25 +0000 (0:00:02.308) 0:00:06.163 **********
2026-03-29 00:49:47.732153 | orchestrator | changed: [testbed-manager]
2026-03-29 00:49:47.732161 | orchestrator |
2026-03-29 00:49:47.732168 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-29 00:49:47.732176 | orchestrator | Sunday 29 March 2026 00:48:29 +0000 (0:00:03.995) 0:00:10.158 **********
2026-03-29 00:49:47.732184 | orchestrator | changed: [testbed-manager]
2026-03-29 00:49:47.732191 | orchestrator |
2026-03-29 00:49:47.732199 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-29 00:49:47.732207 | orchestrator | Sunday 29 March 2026 00:48:32 +0000 (0:00:02.738) 0:00:12.897 **********
2026-03-29 00:49:47.732215 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-29 00:49:47.732222 | orchestrator | ok: [testbed-manager]
2026-03-29 00:49:47.732230 | orchestrator |
2026-03-29 00:49:47.732237 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-29 00:49:47.732245 | orchestrator | Sunday 29 March 2026 00:49:00 +0000 (0:00:28.493) 0:00:41.390 **********
2026-03-29 00:49:47.732253 | orchestrator | changed: [testbed-manager]
2026-03-29 00:49:47.732261 | orchestrator |
2026-03-29 00:49:47.732268 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:49:47.732277 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:49:47.732285 | orchestrator |
2026-03-29 00:49:47.732293 | orchestrator |
2026-03-29 00:49:47.732302 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:49:47.732309 | orchestrator | Sunday 29 March 2026 00:49:04 +0000 (0:00:03.826) 0:00:45.217 **********
2026-03-29 00:49:47.732317 | orchestrator | ===============================================================================
2026-03-29 00:49:47.732341 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 28.49s
2026-03-29 00:49:47.732349 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.00s
2026-03-29 00:49:47.732357 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.83s
2026-03-29 00:49:47.732373 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.74s
2026-03-29 00:49:47.732381 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.70s
2026-03-29 00:49:47.732388 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.31s
2026-03-29 00:49:47.732396 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.23s
2026-03-29 00:49:47.732403 | orchestrator |
2026-03-29 00:49:47.732411 | orchestrator |
2026-03-29 00:49:47.732418 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-29 00:49:47.732425 | orchestrator |
2026-03-29 00:49:47.732433 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-29 00:49:47.732440 | orchestrator | Sunday 29 March 2026 00:48:19 +0000 (0:00:00.672) 0:00:00.672 **********
2026-03-29 00:49:47.732448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-29 00:49:47.732457 | orchestrator |
2026-03-29 00:49:47.732464 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-29 00:49:47.732472 | orchestrator | Sunday 29 March 2026 00:48:19 +0000 (0:00:00.667) 0:00:01.340 **********
2026-03-29 00:49:47.732479 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-29 00:49:47.732487 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-29 00:49:47.732494 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-29 00:49:47.732502 | orchestrator |
2026-03-29 00:49:47.732509 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-29 00:49:47.732517 | orchestrator | Sunday 29 March 2026 00:48:23 +0000 (0:00:03.065) 0:00:04.406 **********
2026-03-29 00:49:47.732524 | orchestrator | changed: [testbed-manager]
2026-03-29 00:49:47.732532 | orchestrator |
2026-03-29 00:49:47.732541 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-29 00:49:47.732549 | orchestrator | Sunday 29 March 2026 00:48:25 +0000 (0:00:02.735) 0:00:07.141 **********
2026-03-29 00:49:47.732569 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-29 00:49:47.732578 | orchestrator | ok: [testbed-manager]
2026-03-29 00:49:47.732585 | orchestrator |
2026-03-29 00:49:47.732593 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-29 00:49:47.732600 | orchestrator | Sunday 29 March 2026 00:49:01 +0000 (0:00:36.116) 0:00:43.257 **********
2026-03-29 00:49:47.732607 | orchestrator | changed: [testbed-manager]
2026-03-29 00:49:47.732615 | orchestrator |
2026-03-29 00:49:47.732622 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-29 00:49:47.732630 | orchestrator | Sunday 29 March 2026 00:49:02 +0000 (0:00:00.910) 0:00:44.168 **********
2026-03-29 00:49:47.732637 | orchestrator | ok: [testbed-manager]
2026-03-29 00:49:47.732645 | orchestrator |
2026-03-29 00:49:47.732652 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-29 00:49:47.732660 | orchestrator | Sunday 29 March 2026 00:49:03 +0000 (0:00:00.687) 0:00:44.856 **********
2026-03-29 00:49:47.732668 | orchestrator | changed: [testbed-manager]
2026-03-29 00:49:47.732675 | orchestrator |
2026-03-29 00:49:47.732683 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-29 00:49:47.732690 | orchestrator | Sunday 29 March 2026 00:49:05 +0000 (0:00:02.352) 0:00:47.209 **********
2026-03-29 00:49:47.732698 | orchestrator | changed: [testbed-manager]
2026-03-29 00:49:47.732705 | orchestrator |
2026-03-29 00:49:47.732718 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-29 00:49:47.732726 | orchestrator | Sunday 29 March 2026 00:49:07 +0000 (0:00:01.549) 0:00:48.758 **********
2026-03-29 00:49:47.732733 | orchestrator | changed: [testbed-manager]
2026-03-29 00:49:47.732740 | orchestrator |
2026-03-29 00:49:47.732748 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-29 00:49:47.732756 | orchestrator | Sunday 29 March 2026 00:49:08 +0000 (0:00:00.876) 0:00:49.635 **********
2026-03-29 00:49:47.732763 | orchestrator | ok: [testbed-manager]
2026-03-29 00:49:47.732771 | orchestrator |
2026-03-29 00:49:47.732778 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:49:47.732785 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:49:47.732793 | orchestrator |
2026-03-29 00:49:47.732800 | orchestrator |
2026-03-29 00:49:47.732807 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:49:47.732815 | orchestrator | Sunday 29 March 2026 00:49:08 +0000 (0:00:00.469) 0:00:50.105 **********
2026-03-29 00:49:47.732823 | orchestrator | ===============================================================================
2026-03-29 00:49:47.732830 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.12s
2026-03-29 00:49:47.732837 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.07s
2026-03-29 00:49:47.732845 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.74s
2026-03-29 00:49:47.732852 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.35s
2026-03-29 00:49:47.732860 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.55s
2026-03-29 00:49:47.732867 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.91s
2026-03-29 00:49:47.732875 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.88s
2026-03-29 00:49:47.732882 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.69s
2026-03-29 00:49:47.732890 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.67s
2026-03-29 00:49:47.732897 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.47s
2026-03-29 00:49:47.732904 | orchestrator |
2026-03-29 00:49:47.732911 | orchestrator |
2026-03-29 00:49:47.732922 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-29 00:49:47.732929 | orchestrator |
2026-03-29 00:49:47.732937 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-29 00:49:47.732944 | orchestrator | Sunday 29 March 2026 00:48:40 +0000 (0:00:00.276) 0:00:00.276 **********
2026-03-29 00:49:47.732950 | orchestrator | ok: [testbed-manager]
2026-03-29 00:49:47.732957 | orchestrator |
2026-03-29 00:49:47.732964 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-29 00:49:47.732971 | orchestrator | Sunday 29 March 2026 00:48:41 +0000 (0:00:01.586) 0:00:01.862 **********
2026-03-29 00:49:47.732992 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-29 00:49:47.733000 | orchestrator |
2026-03-29 00:49:47.733007 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-29 00:49:47.733013 | orchestrator | Sunday 29 March 2026 00:48:42 +0000 (0:00:00.614) 0:00:02.477 **********
2026-03-29 00:49:47.733020 | orchestrator | changed: [testbed-manager]
2026-03-29 00:49:47.733027 | orchestrator |
2026-03-29 00:49:47.733034 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-29 00:49:47.733041 | orchestrator | Sunday 29 March 2026 00:48:43 +0000 (0:00:01.005) 0:00:03.482 **********
2026-03-29 00:49:47.733047 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-29 00:49:47.733054 | orchestrator | ok: [testbed-manager]
2026-03-29 00:49:47.733061 | orchestrator |
2026-03-29 00:49:47.733068 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-29 00:49:47.733080 | orchestrator | Sunday 29 March 2026 00:49:38 +0000 (0:00:54.631) 0:00:58.114 **********
2026-03-29 00:49:47.733087 | orchestrator | changed: [testbed-manager]
2026-03-29 00:49:47.733094 | orchestrator |
2026-03-29 00:49:47.733101 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:49:47.733110 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:49:47.733117 | orchestrator |
2026-03-29 00:49:47.733124 | orchestrator |
2026-03-29 00:49:47.733130 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:49:47.733144 | orchestrator | Sunday 29 March 2026 00:49:44 +0000 (0:00:06.107) 0:01:04.221 **********
2026-03-29 00:49:47.733151 | orchestrator | ===============================================================================
2026-03-29 00:49:47.733158 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 54.63s
2026-03-29 00:49:47.733164 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 6.11s
2026-03-29 00:49:47.733171 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.59s
2026-03-29 00:49:47.733178 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.01s
2026-03-29 00:49:47.733184 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.61s
2026-03-29 00:49:47.734117 | orchestrator | 2026-03-29 00:49:47 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:47.734584 | orchestrator | 2026-03-29 00:49:47 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:47.735188 | orchestrator | 2026-03-29 00:49:47 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:47.735797 | orchestrator | 2026-03-29 00:49:47 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:47.735864 | orchestrator | 2026-03-29 00:49:47 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:50.779530 | orchestrator | 2026-03-29 00:49:50 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:50.780834 | orchestrator | 2026-03-29 00:49:50 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:50.783421 | orchestrator | 2026-03-29 00:49:50 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:50.785855 | orchestrator | 2026-03-29 00:49:50 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:50.785892 | orchestrator | 2026-03-29 00:49:50 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:53.821762 | orchestrator | 2026-03-29 00:49:53 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:53.824124 | orchestrator | 2026-03-29 00:49:53 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:53.826128 | orchestrator | 2026-03-29 00:49:53 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:53.827751 | orchestrator | 2026-03-29 00:49:53 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:53.828480 | orchestrator | 2026-03-29 00:49:53 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:56.864267 | orchestrator | 2026-03-29 00:49:56 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:56.865994 | orchestrator | 2026-03-29 00:49:56 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:56.868615 | orchestrator | 2026-03-29 00:49:56 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:56.870600 | orchestrator | 2026-03-29 00:49:56 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:56.870666 | orchestrator | 2026-03-29 00:49:56 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:49:59.912475 | orchestrator | 2026-03-29 00:49:59 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:49:59.915141 | orchestrator | 2026-03-29 00:49:59 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:49:59.919030 | orchestrator | 2026-03-29 00:49:59 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:49:59.920376 | orchestrator | 2026-03-29 00:49:59 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:49:59.920413 | orchestrator | 2026-03-29 00:49:59 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:50:02.963981 | orchestrator | 2026-03-29 00:50:02 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state STARTED
2026-03-29 00:50:02.965244 | orchestrator | 2026-03-29 00:50:02 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:50:02.967023 | orchestrator | 2026-03-29 00:50:02 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:50:02.968329 | orchestrator | 2026-03-29 00:50:02 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED
2026-03-29 00:50:02.968823 | orchestrator | 2026-03-29 00:50:02 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:50:06.010969 | orchestrator |
2026-03-29 00:50:06.011128 | orchestrator |
2026-03-29 00:50:06.011139 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 00:50:06.011143 | orchestrator |
2026-03-29 00:50:06.011147 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 00:50:06.011150 | orchestrator | Sunday 29 March 2026 00:48:21 +0000 (0:00:00.210) 0:00:00.210 **********
2026-03-29 00:50:06.011155 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-29 00:50:06.011159 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-29 00:50:06.011162 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-29 00:50:06.011165 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-29 00:50:06.011169 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-29 00:50:06.011172 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-29 00:50:06.011175 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-29 00:50:06.011178 | orchestrator |
2026-03-29 00:50:06.011181 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-29 00:50:06.011184 | orchestrator |
2026-03-29 00:50:06.011188 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-29 00:50:06.011191 | orchestrator | Sunday 29 March 2026 00:48:22 +0000 (0:00:00.720) 0:00:00.931 **********
2026-03-29 00:50:06.011204 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:50:06.011208 | orchestrator |
2026-03-29 00:50:06.011211 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-29 00:50:06.011214 | orchestrator | Sunday 29 March 2026 00:48:23 +0000 (0:00:01.922) 0:00:02.853 **********
2026-03-29 00:50:06.011226 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:50:06.011234 | orchestrator | ok: [testbed-manager]
2026-03-29 00:50:06.011240 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:50:06.011245 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:50:06.011253 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:50:06.011258 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:50:06.011263 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:50:06.011281 | orchestrator |
2026-03-29 00:50:06.011286 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-29 00:50:06.011291 | orchestrator | Sunday 29 March 2026 00:48:26 +0000 (0:00:03.032) 0:00:05.885 **********
2026-03-29 00:50:06.011296 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:50:06.011300 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:50:06.011304 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:50:06.011309 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:50:06.011314 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:50:06.011320 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:50:06.011325 | orchestrator | ok: [testbed-manager]
2026-03-29 00:50:06.011330 | orchestrator |
2026-03-29 00:50:06.011333 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-29 00:50:06.011336 | orchestrator | Sunday 29 March 2026 00:48:30 +0000 (0:00:03.754) 0:00:09.640 **********
2026-03-29 00:50:06.011339 | orchestrator | changed: [testbed-manager]
2026-03-29 00:50:06.011343 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:50:06.011346 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:50:06.011349 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:50:06.011352 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:50:06.011355 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:50:06.011358 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:50:06.011362 | orchestrator |
2026-03-29 00:50:06.011365 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-29 00:50:06.011368 | orchestrator | Sunday 29 March 2026 00:48:34 +0000 (0:00:03.985) 0:00:13.625 **********
2026-03-29 00:50:06.011371 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:50:06.011374 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:50:06.011384 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:50:06.011402 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:50:06.011408 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:50:06.011413 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:50:06.011417 | orchestrator | changed: [testbed-manager]
2026-03-29 00:50:06.011422 | orchestrator |
2026-03-29 00:50:06.011427 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-29 00:50:06.011432 | orchestrator | Sunday 29 March 2026 00:48:45 +0000 (0:00:10.794) 0:00:24.419 **********
2026-03-29 00:50:06.011438 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:50:06.011443 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:50:06.011448 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:50:06.011451 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:50:06.011454 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:50:06.011457 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:50:06.011460 | orchestrator | changed: [testbed-manager]
2026-03-29 00:50:06.011464 | orchestrator |
2026-03-29 00:50:06.011467 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-29 00:50:06.011470 | orchestrator | Sunday 29 March 2026 00:49:33 +0000 (0:00:47.758) 0:01:12.177 **********
2026-03-29 00:50:06.011474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:50:06.011478 | orchestrator |
2026-03-29 00:50:06.011482 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-29 00:50:06.011485 | orchestrator | Sunday 29 March 2026 00:49:34 +0000 (0:00:01.569) 0:01:13.747 **********
2026-03-29 00:50:06.011488 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-29 00:50:06.011492 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-29 00:50:06.011495 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-29 00:50:06.011498 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-29 00:50:06.011510 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-29 00:50:06.011514 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-29 00:50:06.011521 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-29 00:50:06.011524 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-29 00:50:06.011527 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-29 00:50:06.011530 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-29 00:50:06.011534 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-29 00:50:06.011537 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-29 00:50:06.011540 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-29 00:50:06.011543 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-29 00:50:06.011546 | orchestrator |
2026-03-29 00:50:06.011549 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-29 00:50:06.011553 | orchestrator | Sunday 29 March 2026 00:49:41 +0000 (0:00:06.735) 0:01:20.482 **********
2026-03-29 00:50:06.011556 | orchestrator | ok: [testbed-manager]
2026-03-29 00:50:06.011560 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:50:06.011563 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:50:06.011566 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:50:06.011569 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:50:06.011572 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:50:06.011575 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:50:06.011579 | orchestrator |
2026-03-29 00:50:06.011582 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-29 00:50:06.011585 | orchestrator | Sunday 29 March 2026 00:49:42 +0000 (0:00:01.345) 0:01:21.828 **********
2026-03-29 00:50:06.011588 | orchestrator | changed: [testbed-manager]
2026-03-29 00:50:06.011591 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:50:06.011595 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:50:06.011598 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:50:06.011601 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:50:06.011604 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:50:06.011607 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:50:06.011611 | orchestrator |
2026-03-29 00:50:06.011614 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-29 00:50:06.011617 | orchestrator | Sunday 29 March 2026 00:49:44 +0000 (0:00:01.714) 0:01:23.543 **********
2026-03-29 00:50:06.011620 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:50:06.011623 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:50:06.011627 | orchestrator | ok: [testbed-manager]
2026-03-29 00:50:06.011630 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:50:06.011633 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:50:06.011636 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:50:06.011639 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:50:06.011642 | orchestrator |
2026-03-29 00:50:06.011646 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-29 00:50:06.011649 | orchestrator | Sunday 29 March 2026 00:49:46 +0000 (0:00:01.873) 0:01:25.417 **********
2026-03-29 00:50:06.011652 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:50:06.011655 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:50:06.011658 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:50:06.011661 | orchestrator | ok: [testbed-manager]
2026-03-29 00:50:06.011665 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:50:06.011668 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:50:06.011671 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:50:06.011674 | orchestrator |
2026-03-29 00:50:06.011677 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-29 00:50:06.011680 | orchestrator | Sunday 29 March 2026 00:49:48 +0000 (0:00:02.387) 0:01:27.804 **********
2026-03-29 00:50:06.011684 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-29 00:50:06.011689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:50:06.011695 | orchestrator |
2026-03-29 00:50:06.011701 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-29 00:50:06.011704 | orchestrator | Sunday 29 March 2026 00:49:50 +0000 (0:00:01.407) 0:01:29.212 **********
2026-03-29 00:50:06.011707 | orchestrator | changed: [testbed-manager]
2026-03-29 00:50:06.011710 | orchestrator |
2026-03-29 00:50:06.011714 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-29 00:50:06.011717 | orchestrator | Sunday 29 March 2026 00:49:52 +0000 (0:00:02.104) 0:01:31.316 **********
2026-03-29 00:50:06.011720 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:50:06.011723 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:50:06.011726 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:50:06.011730 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:50:06.011733 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:50:06.011736 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:50:06.011739 | orchestrator | changed: [testbed-manager]
2026-03-29 00:50:06.011742 | orchestrator |
2026-03-29 00:50:06.011746 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:50:06.011749 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:50:06.011753 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:50:06.011756 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:50:06.011759 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:50:06.011766 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:50:06.011770 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:50:06.011773 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:50:06.011777 | orchestrator |
2026-03-29 00:50:06.011781 | orchestrator |
2026-03-29 00:50:06.011785 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:50:06.011789 | orchestrator | Sunday 29 March 2026 00:50:03 +0000 (0:00:11.111) 0:01:42.428 **********
2026-03-29 00:50:06.011793 | orchestrator | ===============================================================================
2026-03-29 00:50:06.011797 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 47.76s
2026-03-29 00:50:06.011800 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.11s
2026-03-29 00:50:06.011804 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.79s
2026-03-29 00:50:06.011808 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.74s
2026-03-29 00:50:06.011812 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.99s
2026-03-29 00:50:06.011816 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.75s
2026-03-29 00:50:06.011819 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.03s
2026-03-29 00:50:06.011823 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.39s
2026-03-29 00:50:06.011827 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.10s
2026-03-29 00:50:06.011831 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.92s
2026-03-29 00:50:06.011835 | orchestrator | osism.services.netdata : Add netdata user to docker group ---------------
1.87s 2026-03-29 00:50:06.011842 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.71s 2026-03-29 00:50:06.011846 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.57s 2026-03-29 00:50:06.011849 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.41s 2026-03-29 00:50:06.011853 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.35s 2026-03-29 00:50:06.011857 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s 2026-03-29 00:50:06.011861 | orchestrator | 2026-03-29 00:50:06 | INFO  | Task a73a23c4-00be-45aa-9880-fc6952253e4f is in state SUCCESS 2026-03-29 00:50:06.011865 | orchestrator | 2026-03-29 00:50:06 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:06.012253 | orchestrator | 2026-03-29 00:50:06 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:06.013380 | orchestrator | 2026-03-29 00:50:06 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:50:06.014686 | orchestrator | 2026-03-29 00:50:06 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:09.081512 | orchestrator | 2026-03-29 00:50:09 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:09.081566 | orchestrator | 2026-03-29 00:50:09 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:09.081575 | orchestrator | 2026-03-29 00:50:09 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:50:09.081583 | orchestrator | 2026-03-29 00:50:09 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:12.128524 | orchestrator | 2026-03-29 00:50:12 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:12.129211 | orchestrator 
| 2026-03-29 00:50:12 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:12.130567 | orchestrator | 2026-03-29 00:50:12 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:50:12.130601 | orchestrator | 2026-03-29 00:50:12 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:15.169648 | orchestrator | 2026-03-29 00:50:15 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:15.173502 | orchestrator | 2026-03-29 00:50:15 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:15.184765 | orchestrator | 2026-03-29 00:50:15 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:50:15.184829 | orchestrator | 2026-03-29 00:50:15 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:18.256798 | orchestrator | 2026-03-29 00:50:18 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:18.259345 | orchestrator | 2026-03-29 00:50:18 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:18.260818 | orchestrator | 2026-03-29 00:50:18 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:50:18.260880 | orchestrator | 2026-03-29 00:50:18 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:21.320542 | orchestrator | 2026-03-29 00:50:21 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:21.322539 | orchestrator | 2026-03-29 00:50:21 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:21.324156 | orchestrator | 2026-03-29 00:50:21 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:50:21.325411 | orchestrator | 2026-03-29 00:50:21 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:24.365086 | orchestrator | 2026-03-29 00:50:24 | INFO  | Task 
99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:24.368386 | orchestrator | 2026-03-29 00:50:24 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:24.369642 | orchestrator | 2026-03-29 00:50:24 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:50:24.369689 | orchestrator | 2026-03-29 00:50:24 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:27.403162 | orchestrator | 2026-03-29 00:50:27 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:27.403362 | orchestrator | 2026-03-29 00:50:27 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:27.404258 | orchestrator | 2026-03-29 00:50:27 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:50:27.404291 | orchestrator | 2026-03-29 00:50:27 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:30.431220 | orchestrator | 2026-03-29 00:50:30 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:30.431706 | orchestrator | 2026-03-29 00:50:30 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:30.432532 | orchestrator | 2026-03-29 00:50:30 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:50:30.432565 | orchestrator | 2026-03-29 00:50:30 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:33.469533 | orchestrator | 2026-03-29 00:50:33 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:33.470749 | orchestrator | 2026-03-29 00:50:33 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:33.472227 | orchestrator | 2026-03-29 00:50:33 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:50:33.472272 | orchestrator | 2026-03-29 00:50:33 | INFO  | Wait 1 second(s) until the next 
check 2026-03-29 00:50:36.517446 | orchestrator | 2026-03-29 00:50:36 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:36.519371 | orchestrator | 2026-03-29 00:50:36 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:36.520568 | orchestrator | 2026-03-29 00:50:36 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state STARTED 2026-03-29 00:50:36.520601 | orchestrator | 2026-03-29 00:50:36 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:39.570159 | orchestrator | 2026-03-29 00:50:39 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:39.570353 | orchestrator | 2026-03-29 00:50:39 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:39.577384 | orchestrator | 2026-03-29 00:50:39.577443 | orchestrator | 2026-03-29 00:50:39 | INFO  | Task 6bebf354-6e9d-4c77-843d-80478cb84021 is in state SUCCESS 2026-03-29 00:50:39.579800 | orchestrator | 2026-03-29 00:50:39.579849 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-29 00:50:39.579861 | orchestrator | 2026-03-29 00:50:39.579872 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-29 00:50:39.579882 | orchestrator | Sunday 29 March 2026 00:48:11 +0000 (0:00:00.231) 0:00:00.231 ********** 2026-03-29 00:50:39.579893 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:50:39.579903 | orchestrator | 2026-03-29 00:50:39.579913 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-29 00:50:39.579939 | orchestrator | Sunday 29 March 2026 00:48:13 +0000 (0:00:01.675) 0:00:01.906 ********** 2026-03-29 00:50:39.579948 | orchestrator | changed: [testbed-node-0] 
=> (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 00:50:39.579958 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 00:50:39.579967 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 00:50:39.579977 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 00:50:39.580001 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 00:50:39.580012 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 00:50:39.580022 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 00:50:39.580032 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 00:50:39.580038 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 00:50:39.580043 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 00:50:39.580049 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 00:50:39.580055 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 00:50:39.580060 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-29 00:50:39.580066 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 00:50:39.580072 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 00:50:39.580078 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 00:50:39.580086 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 
2026-03-29 00:50:39.580096 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 00:50:39.580105 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 00:50:39.580114 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-29 00:50:39.580125 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-29 00:50:39.580134 | orchestrator | 2026-03-29 00:50:39.580144 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-29 00:50:39.580150 | orchestrator | Sunday 29 March 2026 00:48:17 +0000 (0:00:04.261) 0:00:06.167 ********** 2026-03-29 00:50:39.580156 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:50:39.580162 | orchestrator | 2026-03-29 00:50:39.580170 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-29 00:50:39.580180 | orchestrator | Sunday 29 March 2026 00:48:18 +0000 (0:00:01.400) 0:00:07.568 ********** 2026-03-29 00:50:39.580202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.580216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.580247 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.580259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.580270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.580281 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.580288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580319 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580331 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580337 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.580348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580388 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580410 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.580416 | orchestrator | 2026-03-29 00:50:39.580423 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-29 00:50:39.580430 | orchestrator | Sunday 29 March 2026 00:48:24 +0000 (0:00:06.048) 0:00:13.616 ********** 2026-03-29 00:50:39.580438 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 00:50:39.580445 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:50:39.580457 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:50:39.580468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-29 00:50:39.580475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580489 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:50:39.580497 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:50:39.580504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.580511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580536 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:50:39.580546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.580560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580601 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:50:39.580612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.580624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580660 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:50:39.580672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.580689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580710 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:50:39.580730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.580741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580770 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:50:39.580779 | orchestrator |
2026-03-29 00:50:39.580790 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-03-29 00:50:39.580801 | orchestrator | Sunday 29 March 2026 00:48:27 +0000 (0:00:02.173) 0:00:15.790 **********
2026-03-29 00:50:39.580811 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.580823 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580840 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580851 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:50:39.580865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.580882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580903 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:50:39.580914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.580925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.580950 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:50:39.580961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.580972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581024 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:50:39.581041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.581053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.581097 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:50:39.581118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581141 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:50:39.581154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.581169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581190 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:50:39.581200 | orchestrator |
2026-03-29 00:50:39.581210 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-03-29 00:50:39.581221 | orchestrator | Sunday 29 March 2026 00:48:30 +0000 (0:00:02.876) 0:00:18.667 **********
2026-03-29 00:50:39.581231 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:50:39.581242 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:50:39.581251 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:50:39.581262 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:50:39.581272 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:50:39.581282 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:50:39.581292 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:50:39.581308 | orchestrator |
2026-03-29 00:50:39.581319 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-03-29 00:50:39.581329 | orchestrator | Sunday 29 March 2026 00:48:31 +0000 (0:00:01.911) 0:00:20.579 **********
2026-03-29 00:50:39.581339 | orchestrator | skipping: [testbed-manager]
2026-03-29 00:50:39.581349 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:50:39.581360 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:50:39.581370 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:50:39.581380 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:50:39.581390 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:50:39.581400 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:50:39.581409 | orchestrator |
2026-03-29 00:50:39.581419 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-03-29 00:50:39.581429 | orchestrator | Sunday 29 March 2026 00:48:33 +0000 (0:00:01.227) 0:00:21.807 **********
2026-03-29 00:50:39.581440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.581451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.581463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.581478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581499 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.581510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.581527 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.581538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-29 00:50:39.581574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581652 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581714 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:50:39.581724 | orchestrator |
2026-03-29 00:50:39.581733 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-29 00:50:39.581739 | orchestrator | Sunday 29 March 2026 00:48:41 +0000 (0:00:08.289) 0:00:30.097 **********
2026-03-29 00:50:39.581745 | orchestrator | [WARNING]: Skipped
2026-03-29 00:50:39.581752 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-29 00:50:39.581758 | orchestrator | to this access issue:
2026-03-29 00:50:39.581764 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-29 00:50:39.581770 | orchestrator | directory
2026-03-29 00:50:39.581776 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 00:50:39.581781 | orchestrator |
2026-03-29 00:50:39.581787 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-29 00:50:39.581793 | orchestrator | Sunday 29 March 2026 00:48:42 +0000 (0:00:01.072) 0:00:31.170 **********
2026-03-29 00:50:39.581809 | orchestrator | [WARNING]: Skipped
2026-03-29 00:50:39.581815 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-29 00:50:39.581826 | orchestrator | to this access issue:
2026-03-29 00:50:39.581832 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-29 00:50:39.581838 | orchestrator | directory
2026-03-29 00:50:39.581843 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 00:50:39.581849 | orchestrator |
2026-03-29 00:50:39.581855 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-29 00:50:39.581861 | orchestrator | Sunday 29 March 2026 00:48:43 +0000 (0:00:00.845) 0:00:32.015 **********
2026-03-29 00:50:39.581867 | orchestrator | [WARNING]: Skipped
2026-03-29 00:50:39.581872 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-29 00:50:39.581878 | orchestrator | to this access issue:
2026-03-29 00:50:39.581884 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-29 00:50:39.581890 | orchestrator | directory
2026-03-29 00:50:39.581896 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 00:50:39.581902 | orchestrator |
2026-03-29 00:50:39.581907 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-29 00:50:39.581913 | orchestrator | Sunday 29 March 2026 00:48:44 +0000 (0:00:00.815) 0:00:32.831 **********
2026-03-29 00:50:39.581919 | orchestrator | [WARNING]: Skipped
2026-03-29 00:50:39.581925 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-29 00:50:39.581931 | orchestrator | to this access issue:
2026-03-29 00:50:39.581937 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-29 00:50:39.581942 | orchestrator | directory
2026-03-29 00:50:39.581948 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29
00:50:39.581954 | orchestrator | 2026-03-29 00:50:39.581960 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-29 00:50:39.581965 | orchestrator | Sunday 29 March 2026 00:48:44 +0000 (0:00:00.792) 0:00:33.623 ********** 2026-03-29 00:50:39.581971 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:50:39.581977 | orchestrator | changed: [testbed-manager] 2026-03-29 00:50:39.581983 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:50:39.582187 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:50:39.582194 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:50:39.582200 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:50:39.582206 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:50:39.582217 | orchestrator | 2026-03-29 00:50:39.582224 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-29 00:50:39.582230 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 (0:00:06.422) 0:00:40.046 ********** 2026-03-29 00:50:39.582236 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:50:39.582243 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:50:39.582248 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:50:39.582254 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:50:39.582264 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:50:39.582270 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:50:39.582275 | orchestrator | changed: [testbed-node-5] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-29 00:50:39.582281 | orchestrator | 2026-03-29 00:50:39.582294 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-29 00:50:39.582300 | orchestrator | Sunday 29 March 2026 00:48:54 +0000 (0:00:03.291) 0:00:43.338 ********** 2026-03-29 00:50:39.582305 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:50:39.582311 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:50:39.582317 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:50:39.582323 | orchestrator | changed: [testbed-manager] 2026-03-29 00:50:39.582335 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:50:39.582341 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:50:39.582347 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:50:39.582353 | orchestrator | 2026-03-29 00:50:39.582359 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-29 00:50:39.582364 | orchestrator | Sunday 29 March 2026 00:48:58 +0000 (0:00:03.345) 0:00:46.684 ********** 2026-03-29 00:50:39.582371 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:50:39.582385 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582391 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:50:39.582401 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:50:39.582422 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582436 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582446 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582453 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:50:39.582469 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-03-29 00:50:39.582475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:50:39.582486 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582496 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582503 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:50:39.582515 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582521 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:50:39.582537 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582543 | orchestrator | 2026-03-29 00:50:39.582549 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-29 00:50:39.582555 | orchestrator | Sunday 29 March 2026 00:49:02 +0000 (0:00:04.349) 0:00:51.034 ********** 2026-03-29 00:50:39.582561 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:50:39.582569 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:50:39.582575 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:50:39.582580 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:50:39.582586 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:50:39.582592 
| orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:50:39.582598 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-29 00:50:39.582603 | orchestrator | 2026-03-29 00:50:39.582612 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-29 00:50:39.582618 | orchestrator | Sunday 29 March 2026 00:49:05 +0000 (0:00:02.971) 0:00:54.006 ********** 2026-03-29 00:50:39.582624 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:50:39.582630 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:50:39.582636 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:50:39.582642 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:50:39.582647 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:50:39.582653 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:50:39.582659 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-29 00:50:39.582664 | orchestrator | 2026-03-29 00:50:39.582670 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-29 00:50:39.582676 | orchestrator | Sunday 29 March 2026 00:49:08 +0000 (0:00:03.550) 0:00:57.556 ********** 2026-03-29 00:50:39.582685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582692 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582713 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582733 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582749 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-29 00:50:39.582777 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582796 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582808 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:50:39.582841 | orchestrator | 2026-03-29 00:50:39.582847 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-29 00:50:39.582853 | orchestrator | Sunday 29 March 2026 00:49:13 +0000 (0:00:04.181) 0:01:01.737 ********** 2026-03-29 00:50:39.582862 | orchestrator | changed: [testbed-manager] 2026-03-29 
00:50:39.582868 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:50:39.582874 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:50:39.582883 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:50:39.582889 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:50:39.582895 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:50:39.582901 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:50:39.582907 | orchestrator | 2026-03-29 00:50:39.582912 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-29 00:50:39.582918 | orchestrator | Sunday 29 March 2026 00:49:14 +0000 (0:00:01.663) 0:01:03.401 ********** 2026-03-29 00:50:39.582924 | orchestrator | changed: [testbed-manager] 2026-03-29 00:50:39.582930 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:50:39.582936 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:50:39.582941 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:50:39.582947 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:50:39.582953 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:50:39.582958 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:50:39.582964 | orchestrator | 2026-03-29 00:50:39.582970 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:50:39.582976 | orchestrator | Sunday 29 March 2026 00:49:16 +0000 (0:00:01.270) 0:01:04.671 ********** 2026-03-29 00:50:39.582982 | orchestrator | 2026-03-29 00:50:39.582999 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:50:39.583005 | orchestrator | Sunday 29 March 2026 00:49:16 +0000 (0:00:00.076) 0:01:04.748 ********** 2026-03-29 00:50:39.583010 | orchestrator | 2026-03-29 00:50:39.583016 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:50:39.583022 | orchestrator | Sunday 29 March 2026 
00:49:16 +0000 (0:00:00.061) 0:01:04.809 ********** 2026-03-29 00:50:39.583027 | orchestrator | 2026-03-29 00:50:39.583033 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:50:39.583039 | orchestrator | Sunday 29 March 2026 00:49:16 +0000 (0:00:00.289) 0:01:05.099 ********** 2026-03-29 00:50:39.583045 | orchestrator | 2026-03-29 00:50:39.583050 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:50:39.583056 | orchestrator | Sunday 29 March 2026 00:49:16 +0000 (0:00:00.131) 0:01:05.230 ********** 2026-03-29 00:50:39.583062 | orchestrator | 2026-03-29 00:50:39.583068 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:50:39.583073 | orchestrator | Sunday 29 March 2026 00:49:16 +0000 (0:00:00.078) 0:01:05.309 ********** 2026-03-29 00:50:39.583079 | orchestrator | 2026-03-29 00:50:39.583085 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-29 00:50:39.583091 | orchestrator | Sunday 29 March 2026 00:49:16 +0000 (0:00:00.076) 0:01:05.385 ********** 2026-03-29 00:50:39.583096 | orchestrator | 2026-03-29 00:50:39.583102 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-29 00:50:39.583108 | orchestrator | Sunday 29 March 2026 00:49:16 +0000 (0:00:00.130) 0:01:05.516 ********** 2026-03-29 00:50:39.583114 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:50:39.583120 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:50:39.583126 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:50:39.583132 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:50:39.583137 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:50:39.583143 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:50:39.583149 | orchestrator | changed: [testbed-manager] 2026-03-29 00:50:39.583154 
| orchestrator | 2026-03-29 00:50:39.583160 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-29 00:50:39.583166 | orchestrator | Sunday 29 March 2026 00:49:46 +0000 (0:00:29.818) 0:01:35.334 ********** 2026-03-29 00:50:39.583172 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:50:39.583178 | orchestrator | changed: [testbed-manager] 2026-03-29 00:50:39.583184 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:50:39.583189 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:50:39.583195 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:50:39.583201 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:50:39.583211 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:50:39.583217 | orchestrator | 2026-03-29 00:50:39.583222 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-29 00:50:39.583228 | orchestrator | Sunday 29 March 2026 00:50:27 +0000 (0:00:40.416) 0:02:15.750 ********** 2026-03-29 00:50:39.583234 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:50:39.583241 | orchestrator | ok: [testbed-manager] 2026-03-29 00:50:39.583247 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:50:39.583252 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:50:39.583258 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:50:39.583264 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:50:39.583270 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:50:39.583275 | orchestrator | 2026-03-29 00:50:39.583281 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-29 00:50:39.583287 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:01.883) 0:02:17.634 ********** 2026-03-29 00:50:39.583293 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:50:39.583299 | orchestrator | changed: [testbed-manager] 2026-03-29 00:50:39.583305 | orchestrator | changed: [testbed-node-1] 2026-03-29 
00:50:39.583310 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:50:39.583316 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:50:39.583322 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:50:39.583328 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:50:39.583333 | orchestrator |
2026-03-29 00:50:39.583342 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:50:39.583349 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 00:50:39.583356 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 00:50:39.583362 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 00:50:39.583371 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 00:50:39.583377 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 00:50:39.583383 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 00:50:39.583389 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 00:50:39.583395 | orchestrator |
2026-03-29 00:50:39.583400 | orchestrator |
2026-03-29 00:50:39.583406 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:50:39.583412 | orchestrator | Sunday 29 March 2026 00:50:38 +0000 (0:00:09.580) 0:02:27.214 **********
2026-03-29 00:50:39.583418 | orchestrator | ===============================================================================
2026-03-29 00:50:39.583424 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 40.42s
2026-03-29 00:50:39.583430 | orchestrator | common : Restart fluentd container ------------------------------------- 29.82s
2026-03-29 00:50:39.583435 | orchestrator | common : Restart cron container ----------------------------------------- 9.58s
2026-03-29 00:50:39.583441 | orchestrator | common : Copying over config.json files for services -------------------- 8.29s
2026-03-29 00:50:39.583447 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 6.42s
2026-03-29 00:50:39.583453 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.05s
2026-03-29 00:50:39.583458 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.35s
2026-03-29 00:50:39.583469 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.26s
2026-03-29 00:50:39.583474 | orchestrator | common : Check common containers ---------------------------------------- 4.18s
2026-03-29 00:50:39.583480 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.55s
2026-03-29 00:50:39.583486 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.35s
2026-03-29 00:50:39.583492 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.29s
2026-03-29 00:50:39.583497 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.97s
2026-03-29 00:50:39.583503 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.88s
2026-03-29 00:50:39.583509 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.17s
2026-03-29 00:50:39.583514 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.91s
2026-03-29 00:50:39.583520 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.88s
2026-03-29 00:50:39.583526 | orchestrator | common :
include_tasks -------------------------------------------------- 1.68s 2026-03-29 00:50:39.583532 | orchestrator | common : Creating log volume -------------------------------------------- 1.66s 2026-03-29 00:50:39.583537 | orchestrator | common : include_tasks -------------------------------------------------- 1.40s 2026-03-29 00:50:39.583543 | orchestrator | 2026-03-29 00:50:39 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:42.606279 | orchestrator | 2026-03-29 00:50:42 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:50:42.610417 | orchestrator | 2026-03-29 00:50:42 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:42.610799 | orchestrator | 2026-03-29 00:50:42 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:42.611675 | orchestrator | 2026-03-29 00:50:42 | INFO  | Task 42233f97-78cb-4047-9e74-8a7383ecdf72 is in state STARTED 2026-03-29 00:50:42.611960 | orchestrator | 2026-03-29 00:50:42 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED 2026-03-29 00:50:42.612740 | orchestrator | 2026-03-29 00:50:42 | INFO  | Task 1106184a-a7b8-490a-a101-e7abb68d2357 is in state STARTED 2026-03-29 00:50:42.612768 | orchestrator | 2026-03-29 00:50:42 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:45.637756 | orchestrator | 2026-03-29 00:50:45 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:50:45.638869 | orchestrator | 2026-03-29 00:50:45 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:45.639794 | orchestrator | 2026-03-29 00:50:45 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:45.640453 | orchestrator | 2026-03-29 00:50:45 | INFO  | Task 42233f97-78cb-4047-9e74-8a7383ecdf72 is in state STARTED 2026-03-29 00:50:45.641901 | orchestrator | 2026-03-29 00:50:45 | INFO  | Task 
3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED 2026-03-29 00:50:45.642490 | orchestrator | 2026-03-29 00:50:45 | INFO  | Task 1106184a-a7b8-490a-a101-e7abb68d2357 is in state STARTED 2026-03-29 00:50:45.642526 | orchestrator | 2026-03-29 00:50:45 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:48.672092 | orchestrator | 2026-03-29 00:50:48 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:50:48.672153 | orchestrator | 2026-03-29 00:50:48 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:48.672161 | orchestrator | 2026-03-29 00:50:48 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:48.672166 | orchestrator | 2026-03-29 00:50:48 | INFO  | Task 42233f97-78cb-4047-9e74-8a7383ecdf72 is in state STARTED 2026-03-29 00:50:48.672186 | orchestrator | 2026-03-29 00:50:48 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED 2026-03-29 00:50:48.672191 | orchestrator | 2026-03-29 00:50:48 | INFO  | Task 1106184a-a7b8-490a-a101-e7abb68d2357 is in state STARTED 2026-03-29 00:50:48.672197 | orchestrator | 2026-03-29 00:50:48 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:51.701173 | orchestrator | 2026-03-29 00:50:51 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:50:51.701378 | orchestrator | 2026-03-29 00:50:51 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:51.702288 | orchestrator | 2026-03-29 00:50:51 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:51.702710 | orchestrator | 2026-03-29 00:50:51 | INFO  | Task 42233f97-78cb-4047-9e74-8a7383ecdf72 is in state STARTED 2026-03-29 00:50:51.703825 | orchestrator | 2026-03-29 00:50:51 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED 2026-03-29 00:50:51.705407 | orchestrator | 2026-03-29 00:50:51 | INFO  | Task 
1106184a-a7b8-490a-a101-e7abb68d2357 is in state STARTED 2026-03-29 00:50:51.705444 | orchestrator | 2026-03-29 00:50:51 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:54.789625 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:50:54.789715 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:50:54.789727 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:54.789733 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:54.789740 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task 42233f97-78cb-4047-9e74-8a7383ecdf72 is in state SUCCESS 2026-03-29 00:50:54.789745 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED 2026-03-29 00:50:54.789752 | orchestrator | 2026-03-29 00:50:54 | INFO  | Task 1106184a-a7b8-490a-a101-e7abb68d2357 is in state STARTED 2026-03-29 00:50:54.789758 | orchestrator | 2026-03-29 00:50:54 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:50:57.776539 | orchestrator | 2026-03-29 00:50:57 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:50:57.776629 | orchestrator | 2026-03-29 00:50:57 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:50:57.776641 | orchestrator | 2026-03-29 00:50:57 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:50:57.776650 | orchestrator | 2026-03-29 00:50:57 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:50:57.776659 | orchestrator | 2026-03-29 00:50:57 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED 2026-03-29 00:50:57.776667 | orchestrator | 2026-03-29 00:50:57 | INFO  | Task 
1106184a-a7b8-490a-a101-e7abb68d2357 is in state STARTED 2026-03-29 00:50:57.776676 | orchestrator | 2026-03-29 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:00.821809 | orchestrator | 2026-03-29 00:51:00 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:51:00.821902 | orchestrator | 2026-03-29 00:51:00 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:51:00.821916 | orchestrator | 2026-03-29 00:51:00 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:51:00.821950 | orchestrator | 2026-03-29 00:51:00 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:51:00.821961 | orchestrator | 2026-03-29 00:51:00 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED 2026-03-29 00:51:00.821972 | orchestrator | 2026-03-29 00:51:00 | INFO  | Task 1106184a-a7b8-490a-a101-e7abb68d2357 is in state STARTED 2026-03-29 00:51:00.822008 | orchestrator | 2026-03-29 00:51:00 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:03.964264 | orchestrator | 2026-03-29 00:51:03 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:51:03.966970 | orchestrator | 2026-03-29 00:51:03 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:51:03.971931 | orchestrator | 2026-03-29 00:51:03 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:51:03.975913 | orchestrator | 2026-03-29 00:51:03 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:51:03.976056 | orchestrator | 2026-03-29 00:51:03 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED 2026-03-29 00:51:03.976824 | orchestrator | 2026-03-29 00:51:03 | INFO  | Task 1106184a-a7b8-490a-a101-e7abb68d2357 is in state STARTED 2026-03-29 00:51:03.976942 | orchestrator | 2026-03-29 00:51:03 | INFO  | Wait 1 
second(s) until the next check 2026-03-29 00:51:07.082308 | orchestrator | 2026-03-29 00:51:07 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:51:07.082409 | orchestrator | 2026-03-29 00:51:07 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:51:07.084710 | orchestrator | 2026-03-29 00:51:07 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:51:07.084787 | orchestrator | 2026-03-29 00:51:07 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:51:07.085335 | orchestrator | 2026-03-29 00:51:07 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED 2026-03-29 00:51:07.085496 | orchestrator | 2026-03-29 00:51:07 | INFO  | Task 1106184a-a7b8-490a-a101-e7abb68d2357 is in state STARTED 2026-03-29 00:51:07.085572 | orchestrator | 2026-03-29 00:51:07 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:10.120138 | orchestrator | 2026-03-29 00:51:10 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:51:10.122590 | orchestrator | 2026-03-29 00:51:10 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:51:10.123104 | orchestrator | 2026-03-29 00:51:10 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:51:10.125352 | orchestrator | 2026-03-29 00:51:10 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED 2026-03-29 00:51:10.126089 | orchestrator | 2026-03-29 00:51:10 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED 2026-03-29 00:51:10.127490 | orchestrator | 2026-03-29 00:51:10 | INFO  | Task 1106184a-a7b8-490a-a101-e7abb68d2357 is in state SUCCESS 2026-03-29 00:51:10.127624 | orchestrator | 2026-03-29 00:51:10 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:51:10.129343 | orchestrator | 2026-03-29 00:51:10.129401 | orchestrator | 2026-03-29 
00:51:10.129415 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:51:10.129426 | orchestrator | 2026-03-29 00:51:10.129437 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:51:10.129448 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.194) 0:00:00.194 ********** 2026-03-29 00:51:10.129479 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:10.129488 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:10.129495 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:10.129501 | orchestrator | 2026-03-29 00:51:10.129507 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:51:10.129513 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.373) 0:00:00.567 ********** 2026-03-29 00:51:10.129521 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-29 00:51:10.129528 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-29 00:51:10.129534 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-29 00:51:10.129540 | orchestrator | 2026-03-29 00:51:10.129546 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-29 00:51:10.129553 | orchestrator | 2026-03-29 00:51:10.129559 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-29 00:51:10.129566 | orchestrator | Sunday 29 March 2026 00:50:44 +0000 (0:00:00.654) 0:00:01.222 ********** 2026-03-29 00:51:10.129572 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:51:10.129580 | orchestrator | 2026-03-29 00:51:10.129586 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-29 00:51:10.129593 | orchestrator | Sunday 29 
March 2026 00:50:45 +0000 (0:00:00.683) 0:00:01.905 ********** 2026-03-29 00:51:10.129599 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-29 00:51:10.129606 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-29 00:51:10.129612 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-29 00:51:10.129618 | orchestrator | 2026-03-29 00:51:10.129624 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-29 00:51:10.129630 | orchestrator | Sunday 29 March 2026 00:50:45 +0000 (0:00:00.861) 0:00:02.767 ********** 2026-03-29 00:51:10.129637 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-29 00:51:10.129643 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-29 00:51:10.129649 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-29 00:51:10.129656 | orchestrator | 2026-03-29 00:51:10.129662 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-29 00:51:10.129679 | orchestrator | Sunday 29 March 2026 00:50:48 +0000 (0:00:02.516) 0:00:05.284 ********** 2026-03-29 00:51:10.129685 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:10.129692 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:10.129698 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:10.129704 | orchestrator | 2026-03-29 00:51:10.129710 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-29 00:51:10.129718 | orchestrator | Sunday 29 March 2026 00:50:50 +0000 (0:00:01.845) 0:00:07.129 ********** 2026-03-29 00:51:10.129728 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:51:10.129740 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:51:10.129754 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:51:10.129764 | orchestrator | 2026-03-29 00:51:10.129773 | orchestrator | PLAY RECAP 
*********************************************************************
2026-03-29 00:51:10.129784 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:51:10.129795 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:51:10.129804 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:51:10.129814 | orchestrator |
2026-03-29 00:51:10.129823 | orchestrator |
2026-03-29 00:51:10.129832 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:51:10.129842 | orchestrator | Sunday 29 March 2026 00:50:53 +0000 (0:00:02.970) 0:00:10.099 **********
2026-03-29 00:51:10.129860 | orchestrator | ===============================================================================
2026-03-29 00:51:10.129870 | orchestrator | memcached : Restart memcached container --------------------------------- 2.97s
2026-03-29 00:51:10.129879 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.52s
2026-03-29 00:51:10.129889 | orchestrator | memcached : Check memcached container ----------------------------------- 1.85s
2026-03-29 00:51:10.129899 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.86s
2026-03-29 00:51:10.129909 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.68s
2026-03-29 00:51:10.129918 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2026-03-29 00:51:10.129929 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2026-03-29 00:51:10.129941 | orchestrator |
2026-03-29 00:51:10.129951 | orchestrator |
2026-03-29 00:51:10.129963 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 00:51:10.129972 | orchestrator | 2026-03-29 00:51:10.130075 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:51:10.130087 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.317) 0:00:00.317 ********** 2026-03-29 00:51:10.130093 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:51:10.130100 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:51:10.130106 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:51:10.130112 | orchestrator | 2026-03-29 00:51:10.130119 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:51:10.130139 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.250) 0:00:00.568 ********** 2026-03-29 00:51:10.130146 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-29 00:51:10.130152 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-29 00:51:10.130158 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-29 00:51:10.130164 | orchestrator | 2026-03-29 00:51:10.130170 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-29 00:51:10.130177 | orchestrator | 2026-03-29 00:51:10.130183 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-29 00:51:10.130189 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.526) 0:00:01.094 ********** 2026-03-29 00:51:10.130195 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:51:10.130202 | orchestrator | 2026-03-29 00:51:10.130209 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-29 00:51:10.130215 | orchestrator | Sunday 29 March 2026 00:50:44 +0000 (0:00:00.560) 0:00:01.655 ********** 2026-03-29 00:51:10.130229 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 
00:51:10.130289 | orchestrator | 2026-03-29 00:51:10.130295 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-29 00:51:10.130301 | orchestrator | Sunday 29 March 2026 00:50:45 +0000 (0:00:01.373) 0:00:03.028 ********** 2026-03-29 00:51:10.130312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130384 | orchestrator | 2026-03-29 00:51:10.130394 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-29 00:51:10.130405 | orchestrator | Sunday 29 March 2026 00:50:48 +0000 (0:00:02.910) 0:00:05.938 ********** 2026-03-29 00:51:10.130422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130478 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130484 | orchestrator | 2026-03-29 00:51:10.130491 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-29 00:51:10.130497 | orchestrator | Sunday 29 March 2026 00:50:51 +0000 (0:00:03.094) 0:00:09.033 ********** 2026-03-29 00:51:10.130506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-29 00:51:10.130538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-29 00:51:10.130548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-29 00:51:10.130555 | orchestrator |
2026-03-29 00:51:10.130561 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-29 00:51:10.130567 | orchestrator | Sunday 29 March 2026 00:50:53 +0000 (0:00:01.829) 0:00:10.863 **********
2026-03-29 00:51:10.130574 | orchestrator |
2026-03-29 00:51:10.130580 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-29 00:51:10.130586 | orchestrator | Sunday 29 March 2026 00:50:53 +0000 (0:00:00.077) 0:00:10.941 **********
2026-03-29 00:51:10.130592 | orchestrator |
2026-03-29 00:51:10.130599 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-29 00:51:10.130605 | orchestrator | Sunday 29 March 2026 00:50:53 +0000 (0:00:00.142) 0:00:11.083 **********
2026-03-29 00:51:10.130611 | orchestrator |
2026-03-29 00:51:10.130617 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-29 00:51:10.130623 | orchestrator | Sunday 29 March 2026 00:50:54 +0000 (0:00:00.171) 0:00:11.254 **********
2026-03-29 00:51:10.130634 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:51:10.130640 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:51:10.130649 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:51:10.130656 | orchestrator |
2026-03-29 00:51:10.130662 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-29 00:51:10.130668 | orchestrator | Sunday 29 March 2026 00:51:02 +0000 (0:00:08.888) 0:00:20.143 **********
2026-03-29 00:51:10.130674 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:51:10.130680 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:51:10.130686 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:51:10.130692 | orchestrator |
2026-03-29 00:51:10.130699 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:51:10.130705 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:51:10.130712 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:51:10.130718 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:51:10.130724 | orchestrator |
2026-03-29 00:51:10.130731 | orchestrator |
2026-03-29 00:51:10.130737 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:51:10.130743 | orchestrator | Sunday 29 March 2026 00:51:07 +0000 (0:00:04.060) 0:00:24.203 **********
2026-03-29 00:51:10.130749 | orchestrator | ===============================================================================
2026-03-29 00:51:10.130755 | orchestrator | redis : Restart redis container ----------------------------------------- 8.89s
2026-03-29 00:51:10.130762 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.06s
2026-03-29 00:51:10.130768 | orchestrator | redis : Copying over redis config files --------------------------------- 3.09s
2026-03-29 00:51:10.130774 | orchestrator | redis : Copying over default config.json files -------------------------- 2.91s
2026-03-29 00:51:10.130780 | orchestrator | redis : Check redis containers ------------------------------------------ 1.83s
2026-03-29 00:51:10.130786 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.37s
2026-03-29 00:51:10.130793 | orchestrator | redis : include_tasks --------------------------------------------------- 0.56s
2026-03-29 00:51:10.130799 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2026-03-29 00:51:10.130805 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.39s
2026-03-29 00:51:10.130811 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s
2026-03-29 00:51:13.158921 | orchestrator | 2026-03-29 00:51:13 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:13.159072 | orchestrator | 2026-03-29 00:51:13 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:13.159088 | orchestrator | 2026-03-29 00:51:13 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:13.159101 | orchestrator | 2026-03-29 00:51:13 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:13.160149 | orchestrator | 2026-03-29 00:51:13 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED
2026-03-29 00:51:13.160217 | orchestrator | 2026-03-29 00:51:13 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:16.330427 | orchestrator | 2026-03-29 00:51:16 | INFO  | Task
fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:16.330513 | orchestrator | 2026-03-29 00:51:16 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:16.330521 | orchestrator | 2026-03-29 00:51:16 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:16.330554 | orchestrator | 2026-03-29 00:51:16 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:16.330561 | orchestrator | 2026-03-29 00:51:16 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED
2026-03-29 00:51:16.330568 | orchestrator | 2026-03-29 00:51:16 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:19.306757 | orchestrator | 2026-03-29 00:51:19 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:19.306907 | orchestrator | 2026-03-29 00:51:19 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:19.307188 | orchestrator | 2026-03-29 00:51:19 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:19.309827 | orchestrator | 2026-03-29 00:51:19 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:19.310376 | orchestrator | 2026-03-29 00:51:19 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED
2026-03-29 00:51:19.310415 | orchestrator | 2026-03-29 00:51:19 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:22.344096 | orchestrator | 2026-03-29 00:51:22 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:22.346895 | orchestrator | 2026-03-29 00:51:22 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:22.347141 | orchestrator | 2026-03-29 00:51:22 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:22.347744 | orchestrator | 2026-03-29 00:51:22 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:22.348216 | orchestrator | 2026-03-29 00:51:22 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED
2026-03-29 00:51:22.348239 | orchestrator | 2026-03-29 00:51:22 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:25.375933 | orchestrator | 2026-03-29 00:51:25 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:25.376913 | orchestrator | 2026-03-29 00:51:25 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:25.378310 | orchestrator | 2026-03-29 00:51:25 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:25.379485 | orchestrator | 2026-03-29 00:51:25 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:25.380591 | orchestrator | 2026-03-29 00:51:25 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED
2026-03-29 00:51:25.380648 | orchestrator | 2026-03-29 00:51:25 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:28.413290 | orchestrator | 2026-03-29 00:51:28 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:28.415092 | orchestrator | 2026-03-29 00:51:28 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:28.415818 | orchestrator | 2026-03-29 00:51:28 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:28.415909 | orchestrator | 2026-03-29 00:51:28 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:28.416780 | orchestrator | 2026-03-29 00:51:28 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED
2026-03-29 00:51:28.417090 | orchestrator | 2026-03-29 00:51:28 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:31.452369 | orchestrator | 2026-03-29 00:51:31 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:31.453477 | orchestrator | 2026-03-29 00:51:31 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:31.454715 | orchestrator | 2026-03-29 00:51:31 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:31.455851 | orchestrator | 2026-03-29 00:51:31 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:31.456807 | orchestrator | 2026-03-29 00:51:31 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED
2026-03-29 00:51:31.456836 | orchestrator | 2026-03-29 00:51:31 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:34.496408 | orchestrator | 2026-03-29 00:51:34 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:34.496456 | orchestrator | 2026-03-29 00:51:34 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:34.498366 | orchestrator | 2026-03-29 00:51:34 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:34.498917 | orchestrator | 2026-03-29 00:51:34 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:34.499707 | orchestrator | 2026-03-29 00:51:34 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED
2026-03-29 00:51:34.500064 | orchestrator | 2026-03-29 00:51:34 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:37.536691 | orchestrator | 2026-03-29 00:51:37 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:37.538734 | orchestrator | 2026-03-29 00:51:37 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:37.539436 | orchestrator | 2026-03-29 00:51:37 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:37.540302 | orchestrator | 2026-03-29 00:51:37 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:37.542154 | orchestrator | 2026-03-29 00:51:37 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED
2026-03-29 00:51:37.544421 | orchestrator | 2026-03-29 00:51:37 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:40.581214 | orchestrator | 2026-03-29 00:51:40 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:40.581599 | orchestrator | 2026-03-29 00:51:40 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:40.582059 | orchestrator | 2026-03-29 00:51:40 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:40.582611 | orchestrator | 2026-03-29 00:51:40 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:40.583195 | orchestrator | 2026-03-29 00:51:40 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED
2026-03-29 00:51:40.583205 | orchestrator | 2026-03-29 00:51:40 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:43.613523 | orchestrator | 2026-03-29 00:51:43 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:43.614870 | orchestrator | 2026-03-29 00:51:43 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:43.616179 | orchestrator | 2026-03-29 00:51:43 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:43.617733 | orchestrator | 2026-03-29 00:51:43 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:43.618580 | orchestrator | 2026-03-29 00:51:43 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state STARTED
2026-03-29 00:51:43.618640 | orchestrator | 2026-03-29 00:51:43 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:46.707103 | orchestrator | 2026-03-29 00:51:46 | INFO  | Task
fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:46.707133 | orchestrator | 2026-03-29 00:51:46 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:46.707140 | orchestrator | 2026-03-29 00:51:46 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:51:46.707146 | orchestrator | 2026-03-29 00:51:46 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:46.707151 | orchestrator | 2026-03-29 00:51:46 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:46.707157 | orchestrator | 2026-03-29 00:51:46 | INFO  | Task 3e27d91e-b1f5-41c1-98fc-793ecac79d40 is in state SUCCESS
2026-03-29 00:51:46.707163 | orchestrator | 2026-03-29 00:51:46 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:51:46.707517 | orchestrator |
2026-03-29 00:51:46.707535 | orchestrator |
2026-03-29 00:51:46.707541 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 00:51:46.707547 | orchestrator |
2026-03-29 00:51:46.707553 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 00:51:46.707559 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.244) 0:00:00.244 **********
2026-03-29 00:51:46.707565 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:51:46.707572 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:51:46.707577 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:51:46.707583 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:51:46.707589 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:51:46.707594 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:51:46.707657 | orchestrator |
2026-03-29 00:51:46.707666 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 00:51:46.707671 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.791) 0:00:01.035 **********
2026-03-29 00:51:46.707677 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-29 00:51:46.707683 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-29 00:51:46.707689 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-29 00:51:46.707695 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-29 00:51:46.707700 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-29 00:51:46.707706 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-29 00:51:46.707711 | orchestrator |
2026-03-29 00:51:46.707717 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-03-29 00:51:46.707722 | orchestrator |
2026-03-29 00:51:46.707728 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-03-29 00:51:46.707733 | orchestrator | Sunday 29 March 2026 00:50:44 +0000 (0:00:00.902) 0:00:01.938 **********
2026-03-29 00:51:46.707741 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:51:46.707748 | orchestrator |
2026-03-29 00:51:46.707753 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-29 00:51:46.707759 | orchestrator | Sunday 29 March 2026 00:50:46 +0000 (0:00:01.407) 0:00:03.345 **********
2026-03-29 00:51:46.707765 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-29 00:51:46.707771 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-29 00:51:46.707777 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-29 00:51:46.707782 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-29 00:51:46.707807 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-29 00:51:46.707813 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-29 00:51:46.707818 | orchestrator |
2026-03-29 00:51:46.707824 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-29 00:51:46.707830 | orchestrator | Sunday 29 March 2026 00:50:47 +0000 (0:00:01.624) 0:00:04.969 **********
2026-03-29 00:51:46.707835 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-29 00:51:46.707841 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-29 00:51:46.707847 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-29 00:51:46.707852 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-29 00:51:46.707858 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-29 00:51:46.707864 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-29 00:51:46.707869 | orchestrator |
2026-03-29 00:51:46.707875 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-29 00:51:46.707880 | orchestrator | Sunday 29 March 2026 00:50:49 +0000 (0:00:01.478) 0:00:06.448 **********
2026-03-29 00:51:46.707886 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-03-29 00:51:46.707892 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:51:46.707898 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-03-29 00:51:46.707904 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-03-29 00:51:46.707910 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:51:46.707916 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-03-29 00:51:46.707921 | orchestrator | skipping: [testbed-node-2]
2026-03-29
00:51:46.707927 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-29 00:51:46.707933 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:46.707938 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:46.707944 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-29 00:51:46.707977 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:46.707982 | orchestrator | 2026-03-29 00:51:46.707988 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-29 00:51:46.707993 | orchestrator | Sunday 29 March 2026 00:50:50 +0000 (0:00:01.592) 0:00:08.040 ********** 2026-03-29 00:51:46.707999 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:51:46.708004 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:51:46.708009 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:51:46.708015 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:51:46.708020 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:51:46.708025 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:51:46.708031 | orchestrator | 2026-03-29 00:51:46.708036 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-29 00:51:46.708042 | orchestrator | Sunday 29 March 2026 00:50:51 +0000 (0:00:00.663) 0:00:08.704 ********** 2026-03-29 00:51:46.708056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 00:51:46.708066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 00:51:46.708081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-29 00:51:46.708087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708119 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708125 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708156 | orchestrator |
2026-03-29 00:51:46.708162 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-29 00:51:46.708168 | orchestrator | Sunday 29 March 2026 00:50:53 +0000 (0:00:01.821) 0:00:10.525 **********
2026-03-29 00:51:46.708178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708198 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708204 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708259 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708270 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708276 | orchestrator |
2026-03-29 00:51:46.708281 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-29 00:51:46.708287 | orchestrator | Sunday 29 March 2026 00:50:56 +0000 (0:00:03.286) 0:00:13.812 **********
2026-03-29 00:51:46.708292 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:51:46.708298 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:51:46.708304 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:51:46.708310 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:51:46.708316 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:51:46.708323 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:51:46.708329 | orchestrator |
2026-03-29 00:51:46.708335 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-03-29 00:51:46.708342 | orchestrator | Sunday 29 March 2026 00:50:57 +0000 (0:00:01.100) 0:00:14.912 **********
2026-03-29 00:51:46.708357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708426 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-29 00:51:46.708483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-29 00:51:46.708513 | orchestrator |
2026-03-29 00:51:46.708523 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:51:46.708532 | orchestrator | Sunday 29 March 2026 00:51:00 +0000 (0:00:00.305) 0:00:17.559 **********
2026-03-29 00:51:46.708542 | orchestrator |
2026-03-29 00:51:46.708555 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:51:46.708561 | orchestrator | Sunday 29 March 2026 00:51:00 +0000 (0:00:00.305) 0:00:17.864 **********
2026-03-29 00:51:46.708567 | orchestrator |
2026-03-29 00:51:46.708572 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:51:46.708577 | orchestrator | Sunday 29 March 2026 00:51:00 +0000 (0:00:00.165) 0:00:18.030 **********
2026-03-29 00:51:46.708583 | orchestrator |
2026-03-29 00:51:46.708588 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:51:46.708593 | orchestrator | Sunday 29 March 2026 00:51:01 +0000 (0:00:00.136) 0:00:18.166 **********
2026-03-29 00:51:46.708599 | orchestrator |
2026-03-29 00:51:46.708604 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:51:46.708609 | orchestrator | Sunday 29 March 2026 00:51:01 +0000 (0:00:00.189) 0:00:18.356 **********
2026-03-29 00:51:46.708615 | orchestrator |
2026-03-29 00:51:46.708620 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-29 00:51:46.708625 | orchestrator | Sunday 29 March 2026 00:51:01 +0000 (0:00:00.128) 0:00:18.484 **********
2026-03-29 00:51:46.708631 | orchestrator |
2026-03-29 00:51:46.708636 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-29 00:51:46.708642 | orchestrator | Sunday 29 March 2026 00:51:01 +0000 (0:00:00.143) 0:00:18.627 **********
2026-03-29 00:51:46.708652 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:51:46.708657 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:51:46.708662 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:51:46.708668 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:51:46.708673 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:51:46.708678 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:51:46.708684 | orchestrator |
2026-03-29 00:51:46.708689 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-29 00:51:46.708695 | orchestrator | Sunday 29 March 2026 00:51:11 +0000 (0:00:09.824) 0:00:28.451 **********
2026-03-29 00:51:46.708700 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:51:46.708706 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:51:46.708711 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:51:46.708716 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:51:46.708722 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:51:46.708727 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:51:46.708732 | orchestrator |
2026-03-29 00:51:46.708737 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-29 00:51:46.708743 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 (0:00:01.682) 0:00:30.134 **********
2026-03-29 00:51:46.708748 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:51:46.708754 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:51:46.708759 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:51:46.708764 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:51:46.708770 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:51:46.708775 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:51:46.708780 | orchestrator |
2026-03-29 00:51:46.708786 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-29 00:51:46.708791 | orchestrator | Sunday 29 March 2026 00:51:22 +0000 (0:00:09.701) 0:00:39.835 **********
2026-03-29 00:51:46.708800 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-29 00:51:46.708806 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-29 00:51:46.708812 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-29 00:51:46.708817 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-29 00:51:46.708823 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-29 00:51:46.708828 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-29 00:51:46.708834 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-29 00:51:46.708839 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-29 00:51:46.708844 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-29 00:51:46.708850 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-29 00:51:46.708855 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-29 00:51:46.708861 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:51:46.708866 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-29 00:51:46.708872 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:51:46.708877 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:51:46.708887 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:51:46.708896 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:51:46.708901 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-29 00:51:46.708907 | orchestrator |
2026-03-29 00:51:46.708912 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-29 00:51:46.708918 | orchestrator | Sunday 29 March 2026 00:51:30 +0000 (0:00:07.836) 0:00:47.671 **********
2026-03-29 00:51:46.708923 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-29 00:51:46.708929 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:51:46.708934 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-29 00:51:46.708940 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:51:46.709020 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-29 00:51:46.709029 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:51:46.709034 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-29 00:51:46.709040 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-29 00:51:46.709046 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-29 00:51:46.709051 | orchestrator |
2026-03-29 00:51:46.709056 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-29 00:51:46.709062 | orchestrator | Sunday 29 March 2026 00:51:32 +0000 (0:00:02.092) 0:00:49.764 **********
2026-03-29 00:51:46.709067 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:51:46.709073 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:51:46.709078 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:51:46.709083 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:51:46.709089 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:51:46.709094 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:51:46.709099 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:51:46.709106 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:51:46.709114 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-29 00:51:46.709125 | orchestrator |
2026-03-29 00:51:46.709134 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-29 00:51:46.709141 | orchestrator | Sunday 29 March 2026 00:51:35 +0000 (0:00:03.323) 0:00:53.087 **********
2026-03-29 00:51:46.709149 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:51:46.709156 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:51:46.709163 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:51:46.709171 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:51:46.709177 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:51:46.709184 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:51:46.709192 | orchestrator |
2026-03-29 00:51:46.709200 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:51:46.709208 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 00:51:46.709223 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 00:51:46.709231 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 00:51:46.709240 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 00:51:46.709257 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 00:51:46.709262 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 00:51:46.709266 | orchestrator |
2026-03-29 00:51:46.709271 | orchestrator |
2026-03-29 00:51:46.709276 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:51:46.709281 | orchestrator | Sunday 29 March 2026 00:51:45 +0000 (0:00:09.193) 0:01:02.281 **********
2026-03-29 00:51:46.709286 | orchestrator | ===============================================================================
2026-03-29 00:51:46.709290 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.90s
2026-03-29 00:51:46.709295 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.82s
2026-03-29 00:51:46.709300 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.84s
2026-03-29 00:51:46.709305 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.32s
2026-03-29 00:51:46.709309 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.29s
2026-03-29 00:51:46.709314 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.65s
2026-03-29 00:51:46.709319 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.09s
2026-03-29 00:51:46.709324 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.82s
2026-03-29 00:51:46.709328 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.68s
2026-03-29 00:51:46.709333 | orchestrator | module-load : Load modules ---------------------------------------------- 1.62s
2026-03-29 00:51:46.709338 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.59s
2026-03-29 00:51:46.709342 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.48s
2026-03-29 00:51:46.709351 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.41s
2026-03-29 00:51:46.709356 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.10s
2026-03-29 00:51:46.709361 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.07s
2026-03-29 00:51:46.709365 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s
2026-03-29 00:51:46.709370 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.79s
2026-03-29 00:51:46.709375 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.66s
2026-03-29 00:51:49.701141 | orchestrator | 2026-03-29 00:51:49 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:51:49.704634 | orchestrator | 2026-03-29 00:51:49 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED
2026-03-29 00:51:49.705702 | orchestrator | 2026-03-29 00:51:49 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:51:49.708775 | orchestrator | 2026-03-29 00:51:49 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:51:49.709359 | orchestrator | 2026-03-29 00:51:49 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state STARTED
2026-03-29 00:51:49.709415 | orchestrator |
2026-03-29 00:51:49 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:52:32.430206 | orchestrator | 2026-03-29 00:52:32 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:52:32.430838 | orchestrator | 2026-03-29 00:52:32 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:52:32.432299 | orchestrator | 2026-03-29 00:52:32 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:52:32.433171 | orchestrator | 2026-03-29 00:52:32 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:52:32.434962 | orchestrator | 2026-03-29 00:52:32 | INFO  | Task 91c4ce41-fc32-4c62-b953-dd8348432458 is in state SUCCESS 2026-03-29 00:52:32.437061 | orchestrator | 2026-03-29 00:52:32.437106 | orchestrator | 2026-03-29 00:52:32.437113 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-29 00:52:32.437119 | orchestrator | 2026-03-29 00:52:32.437123 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-29 00:52:32.437129 | orchestrator | Sunday 29 March 2026 00:48:12 +0000 (0:00:00.215) 0:00:00.215 ********** 2026-03-29 00:52:32.437133 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:52:32.437138 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:52:32.437143 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:52:32.437147 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:32.437151 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:32.437168 | orchestrator | ok: 
[testbed-node-2] 2026-03-29 00:52:32.437172 | orchestrator | 2026-03-29 00:52:32.437176 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-29 00:52:32.437180 | orchestrator | Sunday 29 March 2026 00:48:12 +0000 (0:00:00.698) 0:00:00.914 ********** 2026-03-29 00:52:32.437184 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.437188 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.437192 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.437196 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.437200 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.437204 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.437208 | orchestrator | 2026-03-29 00:52:32.437212 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-29 00:52:32.437216 | orchestrator | Sunday 29 March 2026 00:48:13 +0000 (0:00:00.655) 0:00:01.570 ********** 2026-03-29 00:52:32.437219 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.437223 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.437227 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.437231 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.437234 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.437238 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.437242 | orchestrator | 2026-03-29 00:52:32.437246 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-29 00:52:32.437249 | orchestrator | Sunday 29 March 2026 00:48:14 +0000 (0:00:00.753) 0:00:02.323 ********** 2026-03-29 00:52:32.437253 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:52:32.437257 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:52:32.437267 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:52:32.437270 | orchestrator | changed: 
[testbed-node-5] 2026-03-29 00:52:32.437274 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:32.437278 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:52:32.437282 | orchestrator | 2026-03-29 00:52:32.437286 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-29 00:52:32.437289 | orchestrator | Sunday 29 March 2026 00:48:16 +0000 (0:00:02.020) 0:00:04.343 ********** 2026-03-29 00:52:32.437293 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:52:32.437298 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:52:32.437304 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:32.437309 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:52:32.437316 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:52:32.437322 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:52:32.437328 | orchestrator | 2026-03-29 00:52:32.437334 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-29 00:52:32.437339 | orchestrator | Sunday 29 March 2026 00:48:17 +0000 (0:00:00.941) 0:00:05.285 ********** 2026-03-29 00:52:32.437345 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:52:32.437351 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:52:32.437357 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:52:32.437363 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:32.437369 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:52:32.437376 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:52:32.437381 | orchestrator | 2026-03-29 00:52:32.437387 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-29 00:52:32.437393 | orchestrator | Sunday 29 March 2026 00:48:18 +0000 (0:00:01.029) 0:00:06.314 ********** 2026-03-29 00:52:32.437399 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.437403 | orchestrator | skipping: [testbed-node-4] 
2026-03-29 00:52:32.437407 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.437411 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.437415 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.437418 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.437422 | orchestrator | 2026-03-29 00:52:32.437426 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-29 00:52:32.437437 | orchestrator | Sunday 29 March 2026 00:48:19 +0000 (0:00:00.747) 0:00:07.061 ********** 2026-03-29 00:52:32.437440 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.437444 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.437448 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.437452 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.437456 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.437459 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.437463 | orchestrator | 2026-03-29 00:52:32.437467 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-29 00:52:32.437471 | orchestrator | Sunday 29 March 2026 00:48:19 +0000 (0:00:00.721) 0:00:07.783 ********** 2026-03-29 00:52:32.437475 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:52:32.437479 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:52:32.437483 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:52:32.437487 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:52:32.437490 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.437494 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:52:32.437498 | orchestrator | skipping: 
[testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:52:32.437502 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.437506 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:52:32.437510 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:52:32.437522 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.437526 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:52:32.437530 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:52:32.437534 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.437537 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.437541 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 00:52:32.437545 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 00:52:32.437549 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.437553 | orchestrator | 2026-03-29 00:52:32.437556 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-29 00:52:32.437560 | orchestrator | Sunday 29 March 2026 00:48:20 +0000 (0:00:00.747) 0:00:08.531 ********** 2026-03-29 00:52:32.437564 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.437568 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.437571 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.437575 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.437579 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.437583 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.437587 | orchestrator | 2026-03-29 00:52:32.437590 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage 
the downloading of K3S binaries] *** 2026-03-29 00:52:32.437596 | orchestrator | Sunday 29 March 2026 00:48:22 +0000 (0:00:02.178) 0:00:10.710 ********** 2026-03-29 00:52:32.437599 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:52:32.437603 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:52:32.437607 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:52:32.437611 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:32.437614 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:32.437618 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:32.437622 | orchestrator | 2026-03-29 00:52:32.437626 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-29 00:52:32.437629 | orchestrator | Sunday 29 March 2026 00:48:24 +0000 (0:00:01.327) 0:00:12.038 ********** 2026-03-29 00:52:32.437637 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:52:32.437641 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:52:32.437645 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:52:32.437649 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:52:32.437653 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:52:32.437658 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:52:32.437662 | orchestrator | 2026-03-29 00:52:32.437666 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-29 00:52:32.437671 | orchestrator | Sunday 29 March 2026 00:48:30 +0000 (0:00:06.202) 0:00:18.240 ********** 2026-03-29 00:52:32.437675 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.437680 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.437684 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.437688 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.437693 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.437698 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.437702 | 
orchestrator | 2026-03-29 00:52:32.437706 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-29 00:52:32.437711 | orchestrator | Sunday 29 March 2026 00:48:32 +0000 (0:00:01.720) 0:00:19.960 ********** 2026-03-29 00:52:32.437715 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.437719 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.437724 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.437728 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.437732 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.437736 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.437741 | orchestrator | 2026-03-29 00:52:32.437745 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-29 00:52:32.437751 | orchestrator | Sunday 29 March 2026 00:48:33 +0000 (0:00:01.924) 0:00:21.885 ********** 2026-03-29 00:52:32.437755 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.437760 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.437764 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.437768 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.437772 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.437777 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.437781 | orchestrator | 2026-03-29 00:52:32.437785 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-29 00:52:32.437790 | orchestrator | Sunday 29 March 2026 00:48:34 +0000 (0:00:00.767) 0:00:22.653 ********** 2026-03-29 00:52:32.437794 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-29 00:52:32.437799 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-29 00:52:32.437804 | orchestrator | skipping: [testbed-node-4] => 
(item=rancher)  2026-03-29 00:52:32.437808 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-29 00:52:32.437812 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.437817 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-29 00:52:32.438256 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-29 00:52:32.438273 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.438278 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-29 00:52:32.438283 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-29 00:52:32.438287 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.438291 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-29 00:52:32.438295 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-29 00:52:32.438298 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.438302 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.438306 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-29 00:52:32.438310 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-29 00:52:32.438314 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.438324 | orchestrator | 2026-03-29 00:52:32.438328 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-29 00:52:32.438339 | orchestrator | Sunday 29 March 2026 00:48:36 +0000 (0:00:01.528) 0:00:24.181 ********** 2026-03-29 00:52:32.438343 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.438347 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.438351 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.438355 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.438359 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.438362 | orchestrator | skipping: [testbed-node-2] 
2026-03-29 00:52:32.438366 | orchestrator |
2026-03-29 00:52:32.438370 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-29 00:52:32.438374 | orchestrator | Sunday 29 March 2026 00:48:37 +0000 (0:00:01.193) 0:00:25.375 **********
2026-03-29 00:52:32.438378 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:52:32.438382 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:52:32.438386 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:52:32.438389 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.438393 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:52:32.438397 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:52:32.438401 | orchestrator |
2026-03-29 00:52:32.438404 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-29 00:52:32.438408 | orchestrator |
2026-03-29 00:52:32.438412 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-29 00:52:32.438416 | orchestrator | Sunday 29 March 2026 00:48:39 +0000 (0:00:02.346) 0:00:27.722 **********
2026-03-29 00:52:32.438419 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.438423 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.438427 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.438431 | orchestrator |
2026-03-29 00:52:32.438435 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-29 00:52:32.438439 | orchestrator | Sunday 29 March 2026 00:48:41 +0000 (0:00:01.749) 0:00:29.472 **********
2026-03-29 00:52:32.438442 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.438446 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.438450 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.438453 | orchestrator |
2026-03-29 00:52:32.438457 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-29 00:52:32.438461 | orchestrator | Sunday 29 March 2026 00:48:42 +0000 (0:00:01.174) 0:00:30.646 **********
2026-03-29 00:52:32.438465 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.438468 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.438472 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.438476 | orchestrator |
2026-03-29 00:52:32.438480 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-29 00:52:32.438483 | orchestrator | Sunday 29 March 2026 00:48:43 +0000 (0:00:01.055) 0:00:31.702 **********
2026-03-29 00:52:32.438487 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.438491 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.438494 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.438498 | orchestrator |
2026-03-29 00:52:32.438502 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-29 00:52:32.438506 | orchestrator | Sunday 29 March 2026 00:48:44 +0000 (0:00:00.270) 0:00:32.583 **********
2026-03-29 00:52:32.438509 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.438513 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:52:32.438517 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:52:32.438521 | orchestrator |
2026-03-29 00:52:32.438525 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-29 00:52:32.438528 | orchestrator | Sunday 29 March 2026 00:48:44 +0000 (0:00:00.270) 0:00:32.853 **********
2026-03-29 00:52:32.438532 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:52:32.438536 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:52:32.438540 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.438547 | orchestrator |
2026-03-29 00:52:32.438551 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-29 00:52:32.438554 | orchestrator | Sunday 29 March 2026 00:48:46 +0000 (0:00:01.255) 0:00:34.108 **********
2026-03-29 00:52:32.438558 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:52:32.438562 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.438566 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:52:32.438569 | orchestrator |
2026-03-29 00:52:32.438573 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-29 00:52:32.438577 | orchestrator | Sunday 29 March 2026 00:48:47 +0000 (0:00:01.450) 0:00:35.559 **********
2026-03-29 00:52:32.438581 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:52:32.438584 | orchestrator |
2026-03-29 00:52:32.438588 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-29 00:52:32.438592 | orchestrator | Sunday 29 March 2026 00:48:48 +0000 (0:00:00.628) 0:00:36.188 **********
2026-03-29 00:52:32.438596 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.438600 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.438606 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.438610 | orchestrator |
2026-03-29 00:52:32.438614 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-29 00:52:32.438617 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 (0:00:02.790) 0:00:38.978 **********
2026-03-29 00:52:32.438621 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:52:32.438625 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:52:32.438629 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.438633 | orchestrator |
2026-03-29 00:52:32.438636 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-29 00:52:32.438640 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 (0:00:00.882) 0:00:39.860 **********
2026-03-29 00:52:32.438644 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:52:32.438648 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:52:32.438651 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.438655 | orchestrator |
2026-03-29 00:52:32.438659 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-29 00:52:32.438663 | orchestrator | Sunday 29 March 2026 00:48:53 +0000 (0:00:01.268) 0:00:41.128 **********
2026-03-29 00:52:32.438667 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:52:32.438671 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:52:32.438674 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.438678 | orchestrator |
2026-03-29 00:52:32.438682 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-29 00:52:32.438688 | orchestrator | Sunday 29 March 2026 00:48:54 +0000 (0:00:01.517) 0:00:42.646 **********
2026-03-29 00:52:32.438692 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.438696 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:52:32.438700 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:52:32.438704 | orchestrator |
2026-03-29 00:52:32.438708 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-29 00:52:32.438712 | orchestrator | Sunday 29 March 2026 00:48:55 +0000 (0:00:00.779) 0:00:43.425 **********
2026-03-29 00:52:32.438715 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.438719 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:52:32.438723 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:52:32.438727 | orchestrator |
2026-03-29 00:52:32.438731 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-29 00:52:32.438734 | orchestrator | Sunday 29 March 2026 00:48:56 +0000 (0:00:00.817) 0:00:44.244 **********
2026-03-29 00:52:32.438738 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.438742 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:52:32.438746 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:52:32.438749 | orchestrator |
2026-03-29 00:52:32.438753 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-29 00:52:32.438761 | orchestrator | Sunday 29 March 2026 00:48:58 +0000 (0:00:01.855) 0:00:46.099 **********
2026-03-29 00:52:32.438764 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.438768 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.438772 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.438776 | orchestrator |
2026-03-29 00:52:32.438780 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-29 00:52:32.438784 | orchestrator | Sunday 29 March 2026 00:49:00 +0000 (0:00:02.322) 0:00:48.422 **********
2026-03-29 00:52:32.438787 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.438791 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.438795 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.438799 | orchestrator |
2026-03-29 00:52:32.438803 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-29 00:52:32.438806 | orchestrator | Sunday 29 March 2026 00:49:01 +0000 (0:00:01.107) 0:00:49.529 **********
2026-03-29 00:52:32.438810 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-29 00:52:32.438815 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-29 00:52:32.438819 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-29 00:52:32.438823 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-29 00:52:32.438826 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-29 00:52:32.438830 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-29 00:52:32.438834 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-29 00:52:32.438838 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-29 00:52:32.438842 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-29 00:52:32.438846 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-29 00:52:32.438850 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-29 00:52:32.438853 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-29 00:52:32.438859 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.438864 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.438867 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.438871 | orchestrator |
2026-03-29 00:52:32.438875 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-29 00:52:32.438879 | orchestrator | Sunday 29 March 2026 00:49:44 +0000 (0:00:43.096) 0:01:32.625 **********
2026-03-29 00:52:32.438883 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.438887 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:52:32.438890 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:52:32.438920 | orchestrator |
2026-03-29 00:52:32.438927 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-29 00:52:32.438935 | orchestrator | Sunday 29 March 2026 00:49:45 +0000 (0:00:00.479) 0:01:33.105 **********
2026-03-29 00:52:32.438942 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.438950 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:52:32.438959 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:52:32.438965 | orchestrator |
2026-03-29 00:52:32.438970 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-29 00:52:32.438976 | orchestrator | Sunday 29 March 2026 00:49:46 +0000 (0:00:01.128) 0:01:34.233 **********
2026-03-29 00:52:32.438983 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.438989 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:52:32.438994 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:52:32.439000 | orchestrator |
2026-03-29 00:52:32.439009 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-29 00:52:32.439015 | orchestrator | Sunday 29 March 2026 00:49:48 +0000 (0:00:01.864) 0:01:36.098 **********
2026-03-29 00:52:32.439021 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:52:32.439027 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.439034 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:52:32.439040 | orchestrator |
2026-03-29 00:52:32.439046 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-29 00:52:32.439051 | orchestrator | Sunday 29 March 2026 00:50:13 +0000 (0:00:25.712) 0:02:01.810 **********
2026-03-29 00:52:32.439058 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.439063 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.439067 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.439071 | orchestrator |
2026-03-29 00:52:32.439075 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-29 00:52:32.439078 | orchestrator | Sunday 29 March 2026 00:50:14 +0000 (0:00:00.675) 0:02:02.486 **********
2026-03-29 00:52:32.439082 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.439086 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.439090 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.439093 | orchestrator |
2026-03-29 00:52:32.439097 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-29 00:52:32.439101 | orchestrator | Sunday 29 March 2026 00:50:15 +0000 (0:00:00.651) 0:02:03.138 **********
2026-03-29 00:52:32.439105 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.439109 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:52:32.439113 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:52:32.439117 | orchestrator |
2026-03-29 00:52:32.439120 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-29 00:52:32.439124 | orchestrator | Sunday 29 March 2026 00:50:15 +0000 (0:00:00.691) 0:02:03.829 **********
2026-03-29 00:52:32.439128 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.439132 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.439136 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.439139 | orchestrator |
2026-03-29 00:52:32.439143 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-29 00:52:32.439147 | orchestrator | Sunday 29 March 2026 00:50:17 +0000 (0:00:01.093) 0:02:04.923 **********
2026-03-29 00:52:32.439151 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.439155 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.439158 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.439162 | orchestrator |
2026-03-29 00:52:32.439166 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-29 00:52:32.439170 | orchestrator | Sunday 29 March 2026 00:50:17 +0000 (0:00:00.347) 0:02:05.270 **********
2026-03-29 00:52:32.439174 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.439177 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:52:32.439181 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:52:32.439185 | orchestrator |
2026-03-29 00:52:32.439189 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-29 00:52:32.439193 | orchestrator | Sunday 29 March 2026 00:50:18 +0000 (0:00:00.652) 0:02:06.003 **********
2026-03-29 00:52:32.439197 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.439201 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:52:32.439204 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:52:32.439208 | orchestrator |
2026-03-29 00:52:32.439217 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-29 00:52:32.439221 | orchestrator | Sunday 29 March 2026 00:50:18 +0000 (0:00:00.652) 0:02:06.656 **********
2026-03-29 00:52:32.439224 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.439228 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:52:32.439232 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:52:32.439236 | orchestrator |
2026-03-29 00:52:32.439240 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-29 00:52:32.439244 | orchestrator | Sunday 29 March 2026 00:50:19 +0000 (0:00:01.209) 0:02:07.865 **********
2026-03-29 00:52:32.439247 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:52:32.439251 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:52:32.439255 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:52:32.439259 | orchestrator |
2026-03-29 00:52:32.439263 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-29 00:52:32.439267 | orchestrator | Sunday 29 March 2026 00:50:20 +0000 (0:00:00.803) 0:02:08.669 **********
2026-03-29 00:52:32.439271 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.439275 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:52:32.439278 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:52:32.439282 | orchestrator |
2026-03-29 00:52:32.439286 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-29 00:52:32.439290 | orchestrator | Sunday 29 March 2026 00:50:21 +0000 (0:00:00.268) 0:02:08.937 **********
2026-03-29 00:52:32.439293 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.439300 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:52:32.439305 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:52:32.439309 | orchestrator |
2026-03-29 00:52:32.439312 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-29 00:52:32.439316 | orchestrator | Sunday 29 March 2026 00:50:21 +0000 (0:00:00.286) 0:02:09.224 **********
2026-03-29 00:52:32.439320 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.439324 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.439328 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.439332 | orchestrator |
2026-03-29 00:52:32.439336 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-29 00:52:32.439340 | orchestrator | Sunday 29 March 2026 00:50:22 +0000 (0:00:00.819) 0:02:10.043 **********
2026-03-29 00:52:32.439344 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.439347 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.439353 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.439360 | orchestrator |
2026-03-29 00:52:32.439366 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-29 00:52:32.439373 | orchestrator | Sunday 29 March 2026 00:50:22 +0000 (0:00:00.585) 0:02:10.629 **********
2026-03-29 00:52:32.439378 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-29 00:52:32.439388 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-29 00:52:32.439395 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-29 00:52:32.439401 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-29 00:52:32.439408 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-29 00:52:32.439414 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-29 00:52:32.439419 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-29 00:52:32.439425 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-29 00:52:32.439431 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-29 00:52:32.439442 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-29 00:52:32.439449 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-29 00:52:32.439455 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-29 00:52:32.439461 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-29 00:52:32.439467 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-29 00:52:32.439473 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-29 00:52:32.439480 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-29 00:52:32.439486 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-29 00:52:32.439493 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-29 00:52:32.439497 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-29 00:52:32.439501 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-29 00:52:32.439504 | orchestrator |
2026-03-29 00:52:32.439508 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-29 00:52:32.439512 | orchestrator |
2026-03-29 00:52:32.439516 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-29 00:52:32.439520 | orchestrator | Sunday 29 March 2026 00:50:25 +0000 (0:00:02.982) 0:02:13.611 **********
2026-03-29 00:52:32.439523 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:52:32.439527 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:52:32.439531 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:52:32.439535 | orchestrator |
2026-03-29 00:52:32.439539 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-29 00:52:32.439542 | orchestrator | Sunday 29 March 2026 00:50:26 +0000 (0:00:00.501) 0:02:14.113 **********
2026-03-29 00:52:32.439546 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:52:32.439550 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:52:32.439554 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:52:32.439558 | orchestrator |
2026-03-29 00:52:32.439561 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-29 00:52:32.439565 | orchestrator | Sunday 29 March 2026 00:50:26 +0000 (0:00:00.596) 0:02:14.709 **********
2026-03-29 00:52:32.439569 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:52:32.439573 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:52:32.439577 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:52:32.439580 | orchestrator |
2026-03-29 00:52:32.439584 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-29 00:52:32.439588 | orchestrator | Sunday 29 March 2026 00:50:27 +0000 (0:00:00.331) 0:02:15.041 **********
2026-03-29 00:52:32.439592 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:52:32.439596 | orchestrator |
2026-03-29 00:52:32.439600 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-29 00:52:32.439603 | orchestrator | Sunday 29 March 2026 00:50:27 +0000 (0:00:00.672) 0:02:15.714 **********
2026-03-29 00:52:32.439610 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:52:32.439614 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:52:32.439618 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:52:32.439622 | orchestrator |
2026-03-29 00:52:32.439626 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-29 00:52:32.439630 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:00.371) 0:02:16.085 **********
2026-03-29 00:52:32.439633 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:52:32.439637 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:52:32.439644 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:52:32.439648 | orchestrator |
2026-03-29 00:52:32.439653 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-29 00:52:32.439657 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:00.321) 0:02:16.407 **********
2026-03-29 00:52:32.439661 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:52:32.439664 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:52:32.439668 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:52:32.439672 | orchestrator |
2026-03-29 00:52:32.439676 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-29 00:52:32.439680 | orchestrator | Sunday 29 March 2026 00:50:28 +0000 (0:00:00.363) 0:02:16.770 **********
2026-03-29 00:52:32.439684 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:52:32.439688 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:52:32.439692 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:52:32.439695 | orchestrator |
2026-03-29 00:52:32.439702 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-29 00:52:32.439706 | orchestrator | Sunday 29 March 2026 00:50:29 +0000 (0:00:00.708) 0:02:17.480 **********
2026-03-29 00:52:32.439710 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:52:32.439714 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:52:32.439718 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:52:32.439722 | orchestrator |
2026-03-29 00:52:32.439726 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-29 00:52:32.439729 | orchestrator | Sunday 29 March 2026 00:50:30 +0000 (0:00:01.189) 0:02:18.669 **********
2026-03-29 00:52:32.439734 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:52:32.439737 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:52:32.439741 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:52:32.439745 | orchestrator |
2026-03-29 00:52:32.439766 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-29 00:52:32.439770 | orchestrator | Sunday 29 March 2026 00:50:31 +0000 (0:00:01.157) 0:02:19.827 **********
2026-03-29 00:52:32.439774 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:52:32.439777 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:52:32.439781 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:52:32.439786 | orchestrator |
2026-03-29 00:52:32.439789 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-29 00:52:32.439793 | orchestrator |
2026-03-29 00:52:32.439797 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-29 00:52:32.439801 | orchestrator | Sunday 29 March 2026 00:50:41 +0000 (0:00:09.692) 0:02:29.520 **********
2026-03-29 00:52:32.439805 | orchestrator | ok: [testbed-manager]
2026-03-29 00:52:32.439809 | orchestrator |
2026-03-29 00:52:32.439812 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-29 00:52:32.439816 | orchestrator | Sunday 29 March 2026 00:50:42 +0000 (0:00:01.025) 0:02:30.545 **********
2026-03-29 00:52:32.439820 | orchestrator | changed: [testbed-manager]
2026-03-29 00:52:32.439824 | orchestrator |
2026-03-29 00:52:32.439828 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-29 00:52:32.439831 | orchestrator | Sunday 29 March 2026 00:50:42 +0000 (0:00:00.354) 0:02:30.900 **********
2026-03-29 00:52:32.439835 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-29 00:52:32.439839 | orchestrator |
2026-03-29 00:52:32.439843 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-29 00:52:32.439853 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.584) 0:02:31.484 **********
2026-03-29 00:52:32.439859 | orchestrator | changed: [testbed-manager]
2026-03-29 00:52:32.439865 | orchestrator |
2026-03-29 00:52:32.439875 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-29 00:52:32.439883 | orchestrator | Sunday 29 March 2026 00:50:44 +0000 (0:00:00.728) 0:02:32.212 **********
2026-03-29 00:52:32.439889 | orchestrator | changed: [testbed-manager]
2026-03-29 00:52:32.439938 | orchestrator |
2026-03-29 00:52:32.439953 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-29 00:52:32.439959 | orchestrator | Sunday 29 March 2026 00:50:44 +0000 (0:00:00.586) 0:02:32.799 **********
2026-03-29 00:52:32.439966 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-29 00:52:32.439973 | orchestrator |
2026-03-29 00:52:32.439979 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-29 00:52:32.439986 | orchestrator | Sunday 29 March 2026 00:50:46 +0000 (0:00:01.290) 0:02:34.089 **********
2026-03-29 00:52:32.439993 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-29 00:52:32.440001 | orchestrator |
2026-03-29 00:52:32.440008 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-29 00:52:32.440015 | orchestrator | Sunday 29 March 2026 00:50:46 +0000 (0:00:00.626) 0:02:34.716 **********
2026-03-29 00:52:32.440021 | orchestrator | changed: [testbed-manager]
2026-03-29 00:52:32.440028 | orchestrator |
2026-03-29 00:52:32.440034 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-29 00:52:32.440040 | orchestrator | Sunday 29 March 2026 00:50:47 +0000 (0:00:00.516) 0:02:35.232 **********
2026-03-29 00:52:32.440046 | orchestrator | changed: [testbed-manager]
2026-03-29 00:52:32.440052 | orchestrator |
2026-03-29 00:52:32.440060 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-29 00:52:32.440064 | orchestrator |
2026-03-29 00:52:32.440068 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-29 00:52:32.440072 | orchestrator | Sunday 29 March 2026 00:50:47 +0000 (0:00:00.119) 0:02:35.628 **********
2026-03-29 00:52:32.440076 | orchestrator | ok: [testbed-manager]
2026-03-29 00:52:32.440080 | orchestrator |
2026-03-29 00:52:32.440084 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-29 00:52:32.440094 | orchestrator | Sunday 29 March 2026 00:50:47 +0000 (0:00:00.207) 0:02:35.748 **********
2026-03-29 00:52:32.440099 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-29 00:52:32.440103 | orchestrator |
2026-03-29 00:52:32.440106 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-29 00:52:32.440110 | orchestrator | Sunday 29 March 2026 00:50:48 +0000 (0:00:00.207) 0:02:35.955 **********
2026-03-29 00:52:32.440114 | orchestrator | ok: [testbed-manager]
2026-03-29 00:52:32.440118 | orchestrator |
2026-03-29 00:52:32.440122 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-29 00:52:32.440126 | orchestrator | Sunday 29 March 2026 00:50:48 +0000 (0:00:00.791) 0:02:36.747 **********
2026-03-29 00:52:32.440129 | orchestrator | ok: [testbed-manager]
2026-03-29 00:52:32.440133 | orchestrator |
2026-03-29 00:52:32.440137 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-29 00:52:32.440141 | orchestrator | Sunday 29 March 2026 00:50:50 +0000 (0:00:01.401) 0:02:38.148 **********
2026-03-29 00:52:32.440145 | orchestrator | changed: [testbed-manager]
2026-03-29 00:52:32.440149 | orchestrator |
2026-03-29 00:52:32.440153 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-29 00:52:32.440157 | orchestrator | Sunday 29 March 2026 00:50:51 +0000 (0:00:00.808) 0:02:38.957 **********
2026-03-29 00:52:32.440161 | orchestrator | ok: [testbed-manager]
2026-03-29 00:52:32.440164 | orchestrator |
2026-03-29 00:52:32.440173 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-29 00:52:32.440177 | orchestrator | Sunday 29 March 2026 00:50:51 +0000 (0:00:00.459) 0:02:39.416 **********
2026-03-29 00:52:32.440181 | orchestrator | changed: [testbed-manager]
2026-03-29 00:52:32.440185 | orchestrator |
2026-03-29 00:52:32.440189 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-29 00:52:32.440193 | orchestrator | Sunday 29 March 2026 00:50:58 +0000 (0:00:07.293) 0:02:46.710 **********
2026-03-29 00:52:32.440197 | orchestrator | changed: [testbed-manager]
2026-03-29 00:52:32.440201 | orchestrator |
2026-03-29 00:52:32.440207 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-29 00:52:32.440217 | orchestrator | Sunday 29 March 2026 00:51:11 +0000 (0:00:12.863) 0:02:59.573 **********
2026-03-29 00:52:32.440223 | orchestrator | ok: [testbed-manager]
2026-03-29 00:52:32.440232 | orchestrator |
2026-03-29 00:52:32.440240 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-29 00:52:32.440246 | orchestrator |
2026-03-29 00:52:32.440251 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-29 00:52:32.440257 | orchestrator | Sunday 29 March 2026 00:51:12 +0000 (0:00:00.548) 0:03:00.122 **********
2026-03-29 00:52:32.440263 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:52:32.440269 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:52:32.440274 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:52:32.440280 | orchestrator |
2026-03-29 00:52:32.440286 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-29 00:52:32.440292 | orchestrator | Sunday 29 March 2026 00:51:12 +0000 (0:00:00.292) 0:03:00.415 **********
2026-03-29 00:52:32.440298 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.440304 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:52:32.440310 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:52:32.440316 | orchestrator |
2026-03-29 00:52:32.440322 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-29 00:52:32.440328 | orchestrator | Sunday 29 March 2026 00:51:12 +0000 (0:00:00.254) 0:03:00.669 **********
2026-03-29 00:52:32.440334 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:52:32.440340 | orchestrator |
2026-03-29 00:52:32.440346 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-29 00:52:32.440352 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 (0:00:00.627) 0:03:01.297 **********
2026-03-29 00:52:32.440358 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-29 00:52:32.440366 | orchestrator |
2026-03-29 00:52:32.440374 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-29 00:52:32.440383 | orchestrator | Sunday 29 March 2026 00:51:14 +0000 (0:00:00.916) 0:03:02.213 **********
2026-03-29 00:52:32.440388 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 00:52:32.440394 | orchestrator |
2026-03-29 00:52:32.440400 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-29 00:52:32.440406 | orchestrator | Sunday 29 March 2026 00:51:14 +0000 (0:00:00.668) 0:03:02.882 **********
2026-03-29 00:52:32.440411 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.440417 | orchestrator |
2026-03-29 00:52:32.440423 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-29 00:52:32.440428 | orchestrator | Sunday 29 March 2026 00:51:15 +0000 (0:00:00.106) 0:03:02.988 **********
2026-03-29 00:52:32.440434 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 00:52:32.440452 | orchestrator |
2026-03-29 00:52:32.440459 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-29 00:52:32.440466 | orchestrator | Sunday 29 March 2026 00:51:15 +0000 (0:00:00.769) 0:03:03.757 **********
2026-03-29 00:52:32.440473 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.440478 | orchestrator |
2026-03-29 00:52:32.440484 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-29 00:52:32.440491 | orchestrator | Sunday 29 March 2026 00:51:15 +0000 (0:00:00.105) 0:03:03.863 **********
2026-03-29 00:52:32.440498 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.440505 | orchestrator |
2026-03-29 00:52:32.440511 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-29 00:52:32.440517 | orchestrator | Sunday 29 March 2026 00:51:16 +0000 (0:00:00.146) 0:03:04.010 **********
2026-03-29 00:52:32.440524 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.440530 | orchestrator |
2026-03-29 00:52:32.440536 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-29 00:52:32.440543 | orchestrator | Sunday 29 March 2026 00:51:16 +0000 (0:00:00.134) 0:03:04.145 **********
2026-03-29 00:52:32.440557 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:52:32.440563 | orchestrator |
2026-03-29 00:52:32.440575 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-29 00:52:32.440582 | orchestrator | Sunday 29 March 2026 00:51:16 +0000 (0:00:00.138) 0:03:04.283 **********
2026-03-29 00:52:32.440586 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-29 00:52:32.440590 | orchestrator |
2026-03-29 00:52:32.440594 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-29 00:52:32.440598 | orchestrator | Sunday 29 March 2026 00:51:21 +0000 (0:00:04.813) 0:03:09.096 **********
2026-03-29 00:52:32.440602 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-29 00:52:32.440606 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-03-29 00:52:32.440611 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-29 00:52:32.440615 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-29 00:52:32.440619 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-29 00:52:32.440623 | orchestrator | 2026-03-29 00:52:32.440627 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-29 00:52:32.440631 | orchestrator | Sunday 29 March 2026 00:52:02 +0000 (0:00:41.483) 0:03:50.580 ********** 2026-03-29 00:52:32.440640 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 00:52:32.440645 | orchestrator | 2026-03-29 00:52:32.440648 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-29 00:52:32.440653 | orchestrator | Sunday 29 March 2026 00:52:03 +0000 (0:00:01.151) 0:03:51.731 ********** 2026-03-29 00:52:32.440657 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 00:52:32.440661 | orchestrator | 2026-03-29 00:52:32.440665 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-29 00:52:32.440670 | orchestrator | Sunday 29 March 2026 00:52:05 +0000 (0:00:01.793) 0:03:53.525 ********** 2026-03-29 00:52:32.440678 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-29 00:52:32.440687 | orchestrator | 2026-03-29 00:52:32.440693 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-29 00:52:32.440699 | orchestrator | Sunday 29 March 2026 00:52:06 +0000 (0:00:01.140) 0:03:54.666 ********** 2026-03-29 00:52:32.440706 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.440712 | orchestrator | 2026-03-29 00:52:32.440717 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-29 00:52:32.440723 | orchestrator 
| Sunday 29 March 2026 00:52:06 +0000 (0:00:00.149) 0:03:54.815 ********** 2026-03-29 00:52:32.440729 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-29 00:52:32.440735 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-29 00:52:32.440741 | orchestrator | 2026-03-29 00:52:32.440749 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-29 00:52:32.440756 | orchestrator | Sunday 29 March 2026 00:52:09 +0000 (0:00:02.382) 0:03:57.198 ********** 2026-03-29 00:52:32.440762 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.440767 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.440773 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.440779 | orchestrator | 2026-03-29 00:52:32.440784 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-29 00:52:32.440791 | orchestrator | Sunday 29 March 2026 00:52:09 +0000 (0:00:00.404) 0:03:57.602 ********** 2026-03-29 00:52:32.440797 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:32.440804 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:32.440811 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:32.440816 | orchestrator | 2026-03-29 00:52:32.440822 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-29 00:52:32.440828 | orchestrator | 2026-03-29 00:52:32.440840 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-29 00:52:32.440846 | orchestrator | Sunday 29 March 2026 00:52:10 +0000 (0:00:01.155) 0:03:58.757 ********** 2026-03-29 00:52:32.440852 | orchestrator | ok: [testbed-manager] 2026-03-29 00:52:32.440858 | orchestrator | 2026-03-29 00:52:32.440865 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-03-29 00:52:32.440870 | orchestrator | Sunday 29 March 2026 00:52:11 +0000 (0:00:00.172) 0:03:58.929 ********** 2026-03-29 00:52:32.440875 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-29 00:52:32.440881 | orchestrator | 2026-03-29 00:52:32.440887 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-29 00:52:32.440938 | orchestrator | Sunday 29 March 2026 00:52:11 +0000 (0:00:00.280) 0:03:59.210 ********** 2026-03-29 00:52:32.440947 | orchestrator | changed: [testbed-manager] 2026-03-29 00:52:32.440954 | orchestrator | 2026-03-29 00:52:32.440960 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-29 00:52:32.440966 | orchestrator | 2026-03-29 00:52:32.440972 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-29 00:52:32.440977 | orchestrator | Sunday 29 March 2026 00:52:17 +0000 (0:00:05.921) 0:04:05.131 ********** 2026-03-29 00:52:32.440983 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:52:32.440989 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:52:32.440994 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:52:32.441001 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:52:32.441007 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:52:32.441013 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:52:32.441018 | orchestrator | 2026-03-29 00:52:32.441023 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-29 00:52:32.441029 | orchestrator | Sunday 29 March 2026 00:52:17 +0000 (0:00:00.704) 0:04:05.836 ********** 2026-03-29 00:52:32.441035 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 00:52:32.441041 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-03-29 00:52:32.441053 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-29 00:52:32.441075 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 00:52:32.441081 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-29 00:52:32.441087 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-29 00:52:32.441093 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 00:52:32.441099 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-29 00:52:32.441106 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-29 00:52:32.441112 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 00:52:32.441119 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-29 00:52:32.441125 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-29 00:52:32.441140 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-29 00:52:32.441147 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 00:52:32.441153 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-29 00:52:32.441159 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-29 00:52:32.441173 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-29 00:52:32.441179 | orchestrator | ok: [testbed-node-2 -> 
localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 00:52:32.441192 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 00:52:32.441200 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-29 00:52:32.441206 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-29 00:52:32.441212 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-29 00:52:32.441219 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 00:52:32.441227 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 00:52:32.441233 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-29 00:52:32.441239 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 00:52:32.441244 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 00:52:32.441251 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-29 00:52:32.441258 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 00:52:32.441265 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-29 00:52:32.441271 | orchestrator | 2026-03-29 00:52:32.441278 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-29 00:52:32.441284 | orchestrator | Sunday 29 March 2026 00:52:30 +0000 (0:00:12.511) 0:04:18.348 ********** 2026-03-29 00:52:32.441290 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.441297 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.441303 | 
orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.441309 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.441316 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.441322 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.441328 | orchestrator | 2026-03-29 00:52:32.441335 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-29 00:52:32.441342 | orchestrator | Sunday 29 March 2026 00:52:31 +0000 (0:00:00.714) 0:04:19.062 ********** 2026-03-29 00:52:32.441348 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:52:32.441355 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:52:32.441362 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:52:32.441369 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:52:32.441376 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:52:32.441382 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:52:32.441388 | orchestrator | 2026-03-29 00:52:32.441395 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:52:32.441403 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:52:32.441414 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-29 00:52:32.441422 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-29 00:52:32.441428 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-29 00:52:32.441440 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 00:52:32.441446 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 00:52:32.441453 | orchestrator | 
testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 00:52:32.441465 | orchestrator | 2026-03-29 00:52:32.441472 | orchestrator | 2026-03-29 00:52:32.441480 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:52:32.441486 | orchestrator | Sunday 29 March 2026 00:52:31 +0000 (0:00:00.521) 0:04:19.584 ********** 2026-03-29 00:52:32.441492 | orchestrator | =============================================================================== 2026-03-29 00:52:32.441499 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.10s 2026-03-29 00:52:32.441506 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 41.48s 2026-03-29 00:52:32.441512 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.71s 2026-03-29 00:52:32.441527 | orchestrator | kubectl : Install required packages ------------------------------------ 12.86s 2026-03-29 00:52:32.441533 | orchestrator | Manage labels ---------------------------------------------------------- 12.51s 2026-03-29 00:52:32.441540 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.69s 2026-03-29 00:52:32.441546 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.29s 2026-03-29 00:52:32.441552 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.20s 2026-03-29 00:52:32.441559 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.92s 2026-03-29 00:52:32.441566 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.81s 2026-03-29 00:52:32.441572 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.98s 2026-03-29 
00:52:32.441579 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.79s 2026-03-29 00:52:32.441586 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.38s 2026-03-29 00:52:32.441592 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.35s 2026-03-29 00:52:32.441599 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.32s 2026-03-29 00:52:32.441606 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.18s 2026-03-29 00:52:32.441612 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.02s 2026-03-29 00:52:32.441619 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.92s 2026-03-29 00:52:32.441625 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 1.86s 2026-03-29 00:52:32.441631 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.86s 2026-03-29 00:52:32.441637 | orchestrator | 2026-03-29 00:52:32 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:52:35.476700 | orchestrator | 2026-03-29 00:52:35 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:52:35.479639 | orchestrator | 2026-03-29 00:52:35 | INFO  | Task e6202d92-83ee-4920-bfc9-f402c1625ea7 is in state STARTED 2026-03-29 00:52:35.485952 | orchestrator | 2026-03-29 00:52:35 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:52:35.486541 | orchestrator | 2026-03-29 00:52:35 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:52:35.488623 | orchestrator | 2026-03-29 00:52:35 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:52:35.488763 | orchestrator | 2026-03-29 00:52:35 | INFO  | Task 
3fb08039-4973-44c2-8700-334a9d751ad0 is in state STARTED 2026-03-29 00:52:35.488776 | orchestrator | 2026-03-29 00:52:35 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:52:38.521954 | orchestrator | 2026-03-29 00:52:38 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:52:38.522992 | orchestrator | 2026-03-29 00:52:38 | INFO  | Task e6202d92-83ee-4920-bfc9-f402c1625ea7 is in state STARTED 2026-03-29 00:52:38.524444 | orchestrator | 2026-03-29 00:52:38 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:52:38.525711 | orchestrator | 2026-03-29 00:52:38 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:52:38.539530 | orchestrator | 2026-03-29 00:52:38 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:52:38.544217 | orchestrator | 2026-03-29 00:52:38 | INFO  | Task 3fb08039-4973-44c2-8700-334a9d751ad0 is in state STARTED 2026-03-29 00:52:38.544298 | orchestrator | 2026-03-29 00:52:38 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:52:41.589444 | orchestrator | 2026-03-29 00:52:41 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:52:41.592751 | orchestrator | 2026-03-29 00:52:41 | INFO  | Task e6202d92-83ee-4920-bfc9-f402c1625ea7 is in state STARTED 2026-03-29 00:52:41.594433 | orchestrator | 2026-03-29 00:52:41 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:52:41.596047 | orchestrator | 2026-03-29 00:52:41 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:52:41.597534 | orchestrator | 2026-03-29 00:52:41 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:52:41.598692 | orchestrator | 2026-03-29 00:52:41 | INFO  | Task 3fb08039-4973-44c2-8700-334a9d751ad0 is in state SUCCESS 2026-03-29 00:52:41.598948 | orchestrator | 2026-03-29 00:52:41 | INFO  | Wait 1 
second(s) until the next check 2026-03-29 00:52:44.810236 | orchestrator | 2026-03-29 00:52:44 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:52:44.810300 | orchestrator | 2026-03-29 00:52:44 | INFO  | Task e6202d92-83ee-4920-bfc9-f402c1625ea7 is in state SUCCESS 2026-03-29 00:52:44.810310 | orchestrator | 2026-03-29 00:52:44 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:52:44.810317 | orchestrator | 2026-03-29 00:52:44 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:52:44.810324 | orchestrator | 2026-03-29 00:52:44 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:52:44.810330 | orchestrator | 2026-03-29 00:52:44 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:52:47.842105 | orchestrator | 2026-03-29 00:52:47 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:52:47.843004 | orchestrator | 2026-03-29 00:52:47 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:52:47.844055 | orchestrator | 2026-03-29 00:52:47 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:52:47.845075 | orchestrator | 2026-03-29 00:52:47 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:52:47.845102 | orchestrator | 2026-03-29 00:52:47 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:52:50.931088 | orchestrator | 2026-03-29 00:52:50 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:52:50.932611 | orchestrator | 2026-03-29 00:52:50 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:52:50.934997 | orchestrator | 2026-03-29 00:52:50 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:52:50.936454 | orchestrator | 2026-03-29 00:52:50 | INFO  | Task 
99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:52:50.936515 | orchestrator | 2026-03-29 00:52:50 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:52:53.987008 | orchestrator | 2026-03-29 00:52:53 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:52:53.987082 | orchestrator | 2026-03-29 00:52:53 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:52:53.988814 | orchestrator | 2026-03-29 00:52:53 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:52:53.989752 | orchestrator | 2026-03-29 00:52:53 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:52:53.989784 | orchestrator | 2026-03-29 00:52:53 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:52:57.039795 | orchestrator | 2026-03-29 00:52:57 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:52:57.042917 | orchestrator | 2026-03-29 00:52:57 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:52:57.044738 | orchestrator | 2026-03-29 00:52:57 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:52:57.046694 | orchestrator | 2026-03-29 00:52:57 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:52:57.046965 | orchestrator | 2026-03-29 00:52:57 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:00.091567 | orchestrator | 2026-03-29 00:53:00 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:53:00.093625 | orchestrator | 2026-03-29 00:53:00 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:53:00.095761 | orchestrator | 2026-03-29 00:53:00 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:53:00.097566 | orchestrator | 2026-03-29 00:53:00 | INFO  | Task 
99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:53:00.097694 | orchestrator | 2026-03-29 00:53:00 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:03.128478 | orchestrator | 2026-03-29 00:53:03 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:53:03.130227 | orchestrator | 2026-03-29 00:53:03 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:53:03.131962 | orchestrator | 2026-03-29 00:53:03 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:53:03.133974 | orchestrator | 2026-03-29 00:53:03 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:53:03.134111 | orchestrator | 2026-03-29 00:53:03 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:06.165653 | orchestrator | 2026-03-29 00:53:06 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:53:06.166899 | orchestrator | 2026-03-29 00:53:06 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:53:06.168753 | orchestrator | 2026-03-29 00:53:06 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:53:06.170781 | orchestrator | 2026-03-29 00:53:06 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:53:06.171276 | orchestrator | 2026-03-29 00:53:06 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:09.215050 | orchestrator | 2026-03-29 00:53:09 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:53:09.215919 | orchestrator | 2026-03-29 00:53:09 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:53:09.216795 | orchestrator | 2026-03-29 00:53:09 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:53:09.217917 | orchestrator | 2026-03-29 00:53:09 | INFO  | Task 
99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:53:09.217968 | orchestrator | 2026-03-29 00:53:09 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:12.245211 | orchestrator | 2026-03-29 00:53:12 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:53:12.245583 | orchestrator | 2026-03-29 00:53:12 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state STARTED 2026-03-29 00:53:12.246320 | orchestrator | 2026-03-29 00:53:12 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED 2026-03-29 00:53:12.247487 | orchestrator | 2026-03-29 00:53:12 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:53:12.247527 | orchestrator | 2026-03-29 00:53:12 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:53:15.280116 | orchestrator | 2026-03-29 00:53:15 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:53:15.281249 | orchestrator | 2026-03-29 00:53:15.281278 | orchestrator | 2026-03-29 00:53:15.281283 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-29 00:53:15.281288 | orchestrator | 2026-03-29 00:53:15.281292 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-29 00:53:15.281296 | orchestrator | Sunday 29 March 2026 00:52:38 +0000 (0:00:00.217) 0:00:00.217 ********** 2026-03-29 00:53:15.281301 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-29 00:53:15.281305 | orchestrator | 2026-03-29 00:53:15.281310 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-29 00:53:15.281314 | orchestrator | Sunday 29 March 2026 00:52:39 +0000 (0:00:00.890) 0:00:01.107 ********** 2026-03-29 00:53:15.281318 | orchestrator | changed: [testbed-manager] 2026-03-29 00:53:15.281322 | orchestrator | 2026-03-29 00:53:15.281326 | 
orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-29 00:53:15.281330 | orchestrator | Sunday 29 March 2026 00:52:40 +0000 (0:00:01.270) 0:00:02.378 ********** 2026-03-29 00:53:15.281334 | orchestrator | changed: [testbed-manager] 2026-03-29 00:53:15.281338 | orchestrator | 2026-03-29 00:53:15.281342 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:53:15.281346 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:53:15.281351 | orchestrator | 2026-03-29 00:53:15.281355 | orchestrator | 2026-03-29 00:53:15.281359 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:53:15.281363 | orchestrator | Sunday 29 March 2026 00:52:40 +0000 (0:00:00.417) 0:00:02.795 ********** 2026-03-29 00:53:15.281366 | orchestrator | =============================================================================== 2026-03-29 00:53:15.281370 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.27s 2026-03-29 00:53:15.281374 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.89s 2026-03-29 00:53:15.281391 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.42s 2026-03-29 00:53:15.281397 | orchestrator | 2026-03-29 00:53:15.281403 | orchestrator | 2026-03-29 00:53:15.281408 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-29 00:53:15.281415 | orchestrator | 2026-03-29 00:53:15.281421 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-29 00:53:15.281427 | orchestrator | Sunday 29 March 2026 00:52:36 +0000 (0:00:00.181) 0:00:00.181 ********** 2026-03-29 00:53:15.281434 | orchestrator | ok: [testbed-manager] 2026-03-29 
00:53:15.281441 | orchestrator | 2026-03-29 00:53:15.281447 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-29 00:53:15.281498 | orchestrator | Sunday 29 March 2026 00:52:37 +0000 (0:00:00.697) 0:00:00.879 ********** 2026-03-29 00:53:15.281564 | orchestrator | ok: [testbed-manager] 2026-03-29 00:53:15.281572 | orchestrator | 2026-03-29 00:53:15.281578 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-29 00:53:15.281584 | orchestrator | Sunday 29 March 2026 00:52:38 +0000 (0:00:00.612) 0:00:01.492 ********** 2026-03-29 00:53:15.281590 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-29 00:53:15.281596 | orchestrator | 2026-03-29 00:53:15.281601 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-29 00:53:15.281607 | orchestrator | Sunday 29 March 2026 00:52:38 +0000 (0:00:00.710) 0:00:02.202 ********** 2026-03-29 00:53:15.281613 | orchestrator | changed: [testbed-manager] 2026-03-29 00:53:15.281617 | orchestrator | 2026-03-29 00:53:15.281621 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-29 00:53:15.281625 | orchestrator | Sunday 29 March 2026 00:52:40 +0000 (0:00:01.467) 0:00:03.669 ********** 2026-03-29 00:53:15.281628 | orchestrator | changed: [testbed-manager] 2026-03-29 00:53:15.281632 | orchestrator | 2026-03-29 00:53:15.281636 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-29 00:53:15.281640 | orchestrator | Sunday 29 March 2026 00:52:40 +0000 (0:00:00.527) 0:00:04.197 ********** 2026-03-29 00:53:15.281643 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 00:53:15.281647 | orchestrator | 2026-03-29 00:53:15.281651 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-29 
00:53:15.281655 | orchestrator | Sunday 29 March 2026 00:52:42 +0000 (0:00:01.480) 0:00:05.677 ********** 2026-03-29 00:53:15.281659 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 00:53:15.281662 | orchestrator | 2026-03-29 00:53:15.281666 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-29 00:53:15.281670 | orchestrator | Sunday 29 March 2026 00:52:43 +0000 (0:00:00.738) 0:00:06.415 ********** 2026-03-29 00:53:15.281673 | orchestrator | ok: [testbed-manager] 2026-03-29 00:53:15.281677 | orchestrator | 2026-03-29 00:53:15.281681 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-29 00:53:15.281685 | orchestrator | Sunday 29 March 2026 00:52:43 +0000 (0:00:00.357) 0:00:06.772 ********** 2026-03-29 00:53:15.281689 | orchestrator | ok: [testbed-manager] 2026-03-29 00:53:15.281693 | orchestrator | 2026-03-29 00:53:15.281697 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:53:15.281701 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 00:53:15.281704 | orchestrator | 2026-03-29 00:53:15.281708 | orchestrator | 2026-03-29 00:53:15.281712 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:53:15.281716 | orchestrator | Sunday 29 March 2026 00:52:43 +0000 (0:00:00.271) 0:00:07.047 ********** 2026-03-29 00:53:15.281719 | orchestrator | =============================================================================== 2026-03-29 00:53:15.281723 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.48s 2026-03-29 00:53:15.281727 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.47s 2026-03-29 00:53:15.281730 | orchestrator | Change server address in the kubeconfig inside the manager 
service ------ 0.74s 2026-03-29 00:53:15.281742 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.71s 2026-03-29 00:53:15.281746 | orchestrator | Get home directory of operator user ------------------------------------- 0.70s 2026-03-29 00:53:15.281750 | orchestrator | Create .kube directory -------------------------------------------------- 0.61s 2026-03-29 00:53:15.281754 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.53s 2026-03-29 00:53:15.281757 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.36s 2026-03-29 00:53:15.281761 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s 2026-03-29 00:53:15.281765 | orchestrator | 2026-03-29 00:53:15.281817 | orchestrator | 2026-03-29 00:53:15 | INFO  | Task d536eab8-d31b-4975-a59c-deb6a3780f3f is in state SUCCESS 2026-03-29 00:53:15.282107 | orchestrator | 2026-03-29 00:53:15.282120 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-03-29 00:53:15.282124 | orchestrator | 2026-03-29 00:53:15.282128 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-29 00:53:15.282132 | orchestrator | Sunday 29 March 2026 00:50:58 +0000 (0:00:00.198) 0:00:00.198 ********** 2026-03-29 00:53:15.282136 | orchestrator | ok: [localhost] => { 2026-03-29 00:53:15.282141 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2026-03-29 00:53:15.282145 | orchestrator | } 2026-03-29 00:53:15.282149 | orchestrator | 2026-03-29 00:53:15.282153 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-29 00:53:15.282157 | orchestrator | Sunday 29 March 2026 00:50:58 +0000 (0:00:00.102) 0:00:00.300 ********** 2026-03-29 00:53:15.282163 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-29 00:53:15.282168 | orchestrator | ...ignoring 2026-03-29 00:53:15.282172 | orchestrator | 2026-03-29 00:53:15.282176 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-29 00:53:15.282180 | orchestrator | Sunday 29 March 2026 00:51:02 +0000 (0:00:03.054) 0:00:03.355 ********** 2026-03-29 00:53:15.282184 | orchestrator | skipping: [localhost] 2026-03-29 00:53:15.282187 | orchestrator | 2026-03-29 00:53:15.282191 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-29 00:53:15.282195 | orchestrator | Sunday 29 March 2026 00:51:02 +0000 (0:00:00.211) 0:00:03.567 ********** 2026-03-29 00:53:15.282199 | orchestrator | ok: [localhost] 2026-03-29 00:53:15.282203 | orchestrator | 2026-03-29 00:53:15.282207 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:53:15.282211 | orchestrator | 2026-03-29 00:53:15.282215 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:53:15.282218 | orchestrator | Sunday 29 March 2026 00:51:02 +0000 (0:00:00.315) 0:00:03.883 ********** 2026-03-29 00:53:15.282223 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:53:15.282229 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:53:15.282235 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:53:15.282243 | orchestrator | 2026-03-29 
00:53:15.282251 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:53:15.282260 | orchestrator | Sunday 29 March 2026 00:51:03 +0000 (0:00:00.554) 0:00:04.437 ********** 2026-03-29 00:53:15.282266 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-29 00:53:15.282272 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-29 00:53:15.282278 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-29 00:53:15.282284 | orchestrator | 2026-03-29 00:53:15.282290 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-29 00:53:15.282296 | orchestrator | 2026-03-29 00:53:15.282302 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 00:53:15.282308 | orchestrator | Sunday 29 March 2026 00:51:04 +0000 (0:00:01.499) 0:00:05.936 ********** 2026-03-29 00:53:15.282314 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:53:15.282320 | orchestrator | 2026-03-29 00:53:15.282327 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-29 00:53:15.282333 | orchestrator | Sunday 29 March 2026 00:51:05 +0000 (0:00:00.567) 0:00:06.504 ********** 2026-03-29 00:53:15.282340 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:53:15.282346 | orchestrator | 2026-03-29 00:53:15.282352 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-29 00:53:15.282359 | orchestrator | Sunday 29 March 2026 00:51:06 +0000 (0:00:01.032) 0:00:07.536 ********** 2026-03-29 00:53:15.282375 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:53:15.282383 | orchestrator | 2026-03-29 00:53:15.282388 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2026-03-29 00:53:15.282392 | orchestrator | Sunday 29 March 2026 00:51:06 +0000 (0:00:00.344) 0:00:07.881 ********** 2026-03-29 00:53:15.282395 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:53:15.282399 | orchestrator | 2026-03-29 00:53:15.282403 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-29 00:53:15.282407 | orchestrator | Sunday 29 March 2026 00:51:07 +0000 (0:00:00.525) 0:00:08.407 ********** 2026-03-29 00:53:15.282411 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:53:15.282414 | orchestrator | 2026-03-29 00:53:15.282418 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-29 00:53:15.282422 | orchestrator | Sunday 29 March 2026 00:51:08 +0000 (0:00:01.017) 0:00:09.425 ********** 2026-03-29 00:53:15.282426 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:53:15.282430 | orchestrator | 2026-03-29 00:53:15.282433 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 00:53:15.282437 | orchestrator | Sunday 29 March 2026 00:51:08 +0000 (0:00:00.566) 0:00:09.992 ********** 2026-03-29 00:53:15.282441 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:53:15.282445 | orchestrator | 2026-03-29 00:53:15.282449 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-29 00:53:15.282453 | orchestrator | Sunday 29 March 2026 00:51:09 +0000 (0:00:00.655) 0:00:10.648 ********** 2026-03-29 00:53:15.282519 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:53:15.282541 | orchestrator | 2026-03-29 00:53:15.282548 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-29 00:53:15.282554 | orchestrator | Sunday 29 March 2026 00:51:10 +0000 (0:00:00.967) 0:00:11.615 ********** 2026-03-29 
00:53:15.282559 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:53:15.282565 | orchestrator | 2026-03-29 00:53:15.282570 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-29 00:53:15.282576 | orchestrator | Sunday 29 March 2026 00:51:10 +0000 (0:00:00.532) 0:00:12.147 ********** 2026-03-29 00:53:15.282581 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:53:15.282587 | orchestrator | 2026-03-29 00:53:15.282606 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-29 00:53:15.282612 | orchestrator | Sunday 29 March 2026 00:51:11 +0000 (0:00:00.827) 0:00:12.975 ********** 2026-03-29 00:53:15.282625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:53:15.282636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:53:15.282651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2026-03-29 00:53:15.282657 | orchestrator | 2026-03-29 00:53:15.282663 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-29 00:53:15.282669 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 (0:00:01.520) 0:00:14.496 ********** 2026-03-29 00:53:15.282682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:53:15.282692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:53:15.282704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:53:15.282711 | orchestrator | 2026-03-29 00:53:15.282717 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-29 00:53:15.282723 | orchestrator | Sunday 29 March 2026 00:51:16 +0000 (0:00:03.638) 0:00:18.134 ********** 2026-03-29 00:53:15.282728 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 00:53:15.282736 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 00:53:15.282741 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-29 00:53:15.282745 | orchestrator | 2026-03-29 00:53:15.282750 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-29 00:53:15.282754 | orchestrator | Sunday 29 March 2026 00:51:18 +0000 (0:00:01.470) 0:00:19.605 ********** 2026-03-29 00:53:15.282758 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 00:53:15.282763 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 00:53:15.282767 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-29 00:53:15.282771 | orchestrator | 2026-03-29 00:53:15.282776 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-29 00:53:15.282780 | orchestrator | Sunday 29 March 2026 00:51:20 +0000 (0:00:02.244) 0:00:21.849 ********** 2026-03-29 00:53:15.282784 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 00:53:15.282788 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 00:53:15.282792 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-29 00:53:15.282797 | orchestrator | 2026-03-29 00:53:15.282804 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-29 00:53:15.282808 | orchestrator | Sunday 29 March 2026 00:51:21 +0000 (0:00:01.474) 0:00:23.324 ********** 
2026-03-29 00:53:15.282813 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 00:53:15.282817 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 00:53:15.282821 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-29 00:53:15.282825 | orchestrator | 2026-03-29 00:53:15.282830 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-29 00:53:15.282834 | orchestrator | Sunday 29 March 2026 00:51:24 +0000 (0:00:02.426) 0:00:25.750 ********** 2026-03-29 00:53:15.282949 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 00:53:15.282954 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 00:53:15.282961 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-29 00:53:15.282965 | orchestrator | 2026-03-29 00:53:15.282969 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-29 00:53:15.282973 | orchestrator | Sunday 29 March 2026 00:51:26 +0000 (0:00:01.679) 0:00:27.430 ********** 2026-03-29 00:53:15.282976 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 00:53:15.282980 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 00:53:15.282984 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-29 00:53:15.282987 | orchestrator | 2026-03-29 00:53:15.282991 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-29 00:53:15.282995 | orchestrator | Sunday 29 
March 2026 00:51:27 +0000 (0:00:01.505) 0:00:28.935 ********** 2026-03-29 00:53:15.282999 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:53:15.283002 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:53:15.283006 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:53:15.283010 | orchestrator | 2026-03-29 00:53:15.283014 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-29 00:53:15.283017 | orchestrator | Sunday 29 March 2026 00:51:28 +0000 (0:00:00.453) 0:00:29.389 ********** 2026-03-29 00:53:15.283022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:53:15.283026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:53:15.283037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:53:15.283046 | orchestrator | 2026-03-29 00:53:15.283050 | orchestrator | TASK [rabbitmq : Creating 
rabbitmq volume] ************************************* 2026-03-29 00:53:15.283054 | orchestrator | Sunday 29 March 2026 00:51:29 +0000 (0:00:01.446) 0:00:30.836 ********** 2026-03-29 00:53:15.283058 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:53:15.283061 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:53:15.283065 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:53:15.283069 | orchestrator | 2026-03-29 00:53:15.283073 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-29 00:53:15.283076 | orchestrator | Sunday 29 March 2026 00:51:30 +0000 (0:00:00.783) 0:00:31.620 ********** 2026-03-29 00:53:15.283080 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:53:15.283084 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:53:15.283088 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:53:15.283092 | orchestrator | 2026-03-29 00:53:15.283095 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-29 00:53:15.283099 | orchestrator | Sunday 29 March 2026 00:51:39 +0000 (0:00:08.772) 0:00:40.393 ********** 2026-03-29 00:53:15.283103 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:53:15.283107 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:53:15.283110 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:53:15.283114 | orchestrator | 2026-03-29 00:53:15.283118 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 00:53:15.283122 | orchestrator | 2026-03-29 00:53:15.283125 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 00:53:15.283129 | orchestrator | Sunday 29 March 2026 00:51:39 +0000 (0:00:00.439) 0:00:40.832 ********** 2026-03-29 00:53:15.283133 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:53:15.283137 | orchestrator | 2026-03-29 00:53:15.283141 | orchestrator | TASK [rabbitmq : 
Put RabbitMQ node into maintenance mode] ********************** 2026-03-29 00:53:15.283144 | orchestrator | Sunday 29 March 2026 00:51:40 +0000 (0:00:00.609) 0:00:41.441 ********** 2026-03-29 00:53:15.283148 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:53:15.283152 | orchestrator | 2026-03-29 00:53:15.283155 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-29 00:53:15.283159 | orchestrator | Sunday 29 March 2026 00:51:40 +0000 (0:00:00.224) 0:00:41.665 ********** 2026-03-29 00:53:15.283163 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:53:15.283167 | orchestrator | 2026-03-29 00:53:15.283170 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 00:53:15.283174 | orchestrator | Sunday 29 March 2026 00:51:42 +0000 (0:00:01.706) 0:00:43.372 ********** 2026-03-29 00:53:15.283178 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:53:15.283182 | orchestrator | 2026-03-29 00:53:15.283185 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 00:53:15.283189 | orchestrator | 2026-03-29 00:53:15.283193 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 00:53:15.283197 | orchestrator | Sunday 29 March 2026 00:52:38 +0000 (0:00:56.228) 0:01:39.600 ********** 2026-03-29 00:53:15.283200 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:53:15.283208 | orchestrator | 2026-03-29 00:53:15.283211 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-29 00:53:15.283215 | orchestrator | Sunday 29 March 2026 00:52:38 +0000 (0:00:00.734) 0:01:40.335 ********** 2026-03-29 00:53:15.283219 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:53:15.283223 | orchestrator | 2026-03-29 00:53:15.283226 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2026-03-29 00:53:15.283230 | orchestrator | Sunday 29 March 2026 00:52:39 +0000 (0:00:00.457) 0:01:40.792 ********** 2026-03-29 00:53:15.283234 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:53:15.283238 | orchestrator | 2026-03-29 00:53:15.283242 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 00:53:15.283245 | orchestrator | Sunday 29 March 2026 00:52:41 +0000 (0:00:02.025) 0:01:42.817 ********** 2026-03-29 00:53:15.283249 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:53:15.283253 | orchestrator | 2026-03-29 00:53:15.283257 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-29 00:53:15.283260 | orchestrator | 2026-03-29 00:53:15.283264 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-29 00:53:15.283268 | orchestrator | Sunday 29 March 2026 00:52:55 +0000 (0:00:14.181) 0:01:56.999 ********** 2026-03-29 00:53:15.283271 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:53:15.283275 | orchestrator | 2026-03-29 00:53:15.283279 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-29 00:53:15.283283 | orchestrator | Sunday 29 March 2026 00:52:56 +0000 (0:00:00.588) 0:01:57.587 ********** 2026-03-29 00:53:15.283286 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:53:15.283290 | orchestrator | 2026-03-29 00:53:15.283294 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-29 00:53:15.283301 | orchestrator | Sunday 29 March 2026 00:52:56 +0000 (0:00:00.243) 0:01:57.831 ********** 2026-03-29 00:53:15.283305 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:53:15.283308 | orchestrator | 2026-03-29 00:53:15.283312 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-29 
00:53:15.283316 | orchestrator | Sunday 29 March 2026 00:52:58 +0000 (0:00:01.543) 0:01:59.374 ********** 2026-03-29 00:53:15.283320 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:53:15.283323 | orchestrator | 2026-03-29 00:53:15.283327 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-29 00:53:15.283331 | orchestrator | 2026-03-29 00:53:15.283335 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-29 00:53:15.283338 | orchestrator | Sunday 29 March 2026 00:53:11 +0000 (0:00:13.173) 0:02:12.548 ********** 2026-03-29 00:53:15.283342 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:53:15.283346 | orchestrator | 2026-03-29 00:53:15.283350 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-29 00:53:15.283353 | orchestrator | Sunday 29 March 2026 00:53:11 +0000 (0:00:00.557) 0:02:13.106 ********** 2026-03-29 00:53:15.283357 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-29 00:53:15.283361 | orchestrator | enable_outward_rabbitmq_True 2026-03-29 00:53:15.283367 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-29 00:53:15.283371 | orchestrator | outward_rabbitmq_restart 2026-03-29 00:53:15.283375 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:53:15.283379 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:53:15.283382 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:53:15.283386 | orchestrator | 2026-03-29 00:53:15.283390 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-29 00:53:15.283393 | orchestrator | skipping: no hosts matched 2026-03-29 00:53:15.283397 | orchestrator | 2026-03-29 00:53:15.283401 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-29 
00:53:15.283404 | orchestrator | skipping: no hosts matched
2026-03-29 00:53:15.283408 | orchestrator |
2026-03-29 00:53:15.283412 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-03-29 00:53:15.283419 | orchestrator | skipping: no hosts matched
2026-03-29 00:53:15.283423 | orchestrator |
2026-03-29 00:53:15.283426 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:53:15.283431 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-29 00:53:15.283435 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-29 00:53:15.283438 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:53:15.283442 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 00:53:15.283446 | orchestrator |
2026-03-29 00:53:15.283450 | orchestrator |
2026-03-29 00:53:15.283453 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:53:15.283457 | orchestrator | Sunday 29 March 2026 00:53:14 +0000 (0:00:02.336) 0:02:15.442 **********
2026-03-29 00:53:15.283461 | orchestrator | ===============================================================================
2026-03-29 00:53:15.283465 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.58s
2026-03-29 00:53:15.283468 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.77s
2026-03-29 00:53:15.283472 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.28s
2026-03-29 00:53:15.283476 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.64s
2026-03-29 00:53:15.283480 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.05s
2026-03-29 00:53:15.283483 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.43s
2026-03-29 00:53:15.283487 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.34s
2026-03-29 00:53:15.283491 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.24s
2026-03-29 00:53:15.283494 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.93s
2026-03-29 00:53:15.283498 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.68s
2026-03-29 00:53:15.283502 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.52s
2026-03-29 00:53:15.283506 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.51s
2026-03-29 00:53:15.283509 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.50s
2026-03-29 00:53:15.283513 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.47s
2026-03-29 00:53:15.283517 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.47s
2026-03-29 00:53:15.283520 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.45s
2026-03-29 00:53:15.283524 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.03s
2026-03-29 00:53:15.283528 | orchestrator | rabbitmq : Check if running RabbitMQ is at most one version behind ------ 1.02s
2026-03-29 00:53:15.283532 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.97s
2026-03-29 00:53:15.283535 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.93s
2026-03-29 00:53:15.283541 | orchestrator |
2026-03-29 00:53:15 |
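The runner output that follows shows the orchestrator waiting on three task IDs, re-checking each task's state every few seconds until one reports SUCCESS. A minimal sketch of such a poll-until-done loop (the `fetch_state` callback and its return values are assumptions for illustration, not the OSISM API):

```python
import time


def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=60.0):
    """Poll fetch_state(task_id) until every task leaves the STARTED state.

    fetch_state is a caller-supplied function (hypothetical here) that
    returns a state string such as "STARTED" or "SUCCESS".
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                # Record the terminal state; the task is no longer pending.
                results[task_id] = state
        pending -= results.keys()
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

The fixed sleep between rounds matches the "Wait 1 second(s) until the next check" messages in the log; a production loop would usually also distinguish FAILURE states from SUCCESS.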
INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:15.284104 | orchestrator | 2026-03-29 00:53:15 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:15.285032 | orchestrator | 2026-03-29 00:53:15 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:18.318243 | orchestrator | 2026-03-29 00:53:18 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:18.318554 | orchestrator | 2026-03-29 00:53:18 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:18.319247 | orchestrator | 2026-03-29 00:53:18 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:18.319433 | orchestrator | 2026-03-29 00:53:18 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:21.354598 | orchestrator | 2026-03-29 00:53:21 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:21.354693 | orchestrator | 2026-03-29 00:53:21 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:21.354926 | orchestrator | 2026-03-29 00:53:21 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:21.355091 | orchestrator | 2026-03-29 00:53:21 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:24.392416 | orchestrator | 2026-03-29 00:53:24 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:24.394426 | orchestrator | 2026-03-29 00:53:24 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:24.396051 | orchestrator | 2026-03-29 00:53:24 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:24.396094 | orchestrator | 2026-03-29 00:53:24 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:27.430458 | orchestrator | 2026-03-29 00:53:27 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:27.431147 | orchestrator | 2026-03-29 00:53:27 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:27.432445 | orchestrator | 2026-03-29 00:53:27 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:27.432484 | orchestrator | 2026-03-29 00:53:27 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:30.468419 | orchestrator | 2026-03-29 00:53:30 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:30.469145 | orchestrator | 2026-03-29 00:53:30 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:30.470886 | orchestrator | 2026-03-29 00:53:30 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:30.471560 | orchestrator | 2026-03-29 00:53:30 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:33.505178 | orchestrator | 2026-03-29 00:53:33 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:33.505408 | orchestrator | 2026-03-29 00:53:33 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:33.506360 | orchestrator | 2026-03-29 00:53:33 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:33.506398 | orchestrator | 2026-03-29 00:53:33 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:36.555663 | orchestrator | 2026-03-29 00:53:36 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:36.562360 | orchestrator | 2026-03-29 00:53:36 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:36.562958 | orchestrator | 2026-03-29 00:53:36 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:36.563295 | orchestrator | 2026-03-29 00:53:36 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:39.607378 | orchestrator | 2026-03-29 00:53:39 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:39.608276 | orchestrator | 2026-03-29 00:53:39 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:39.609589 | orchestrator | 2026-03-29 00:53:39 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:39.609624 | orchestrator | 2026-03-29 00:53:39 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:42.646450 | orchestrator | 2026-03-29 00:53:42 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:42.646795 | orchestrator | 2026-03-29 00:53:42 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:42.648543 | orchestrator | 2026-03-29 00:53:42 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:42.649693 | orchestrator | 2026-03-29 00:53:42 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:45.685944 | orchestrator | 2026-03-29 00:53:45 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:45.687695 | orchestrator | 2026-03-29 00:53:45 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:45.689846 | orchestrator | 2026-03-29 00:53:45 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:45.690168 | orchestrator | 2026-03-29 00:53:45 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:48.727467 | orchestrator | 2026-03-29 00:53:48 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:48.729047 | orchestrator | 2026-03-29 00:53:48 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:48.730556 | orchestrator | 2026-03-29 00:53:48 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:48.730612 | orchestrator | 2026-03-29 00:53:48 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:51.770096 | orchestrator | 2026-03-29 00:53:51 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:51.773985 | orchestrator | 2026-03-29 00:53:51 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:51.775151 | orchestrator | 2026-03-29 00:53:51 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:51.775204 | orchestrator | 2026-03-29 00:53:51 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:54.831297 | orchestrator | 2026-03-29 00:53:54 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:54.832073 | orchestrator | 2026-03-29 00:53:54 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:54.832826 | orchestrator | 2026-03-29 00:53:54 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:54.832860 | orchestrator | 2026-03-29 00:53:54 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:53:57.859345 | orchestrator | 2026-03-29 00:53:57 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:53:57.859653 | orchestrator | 2026-03-29 00:53:57 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:53:57.860737 | orchestrator | 2026-03-29 00:53:57 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:53:57.860764 | orchestrator | 2026-03-29 00:53:57 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:00.897288 | orchestrator | 2026-03-29 00:54:00 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:00.897380 | orchestrator | 2026-03-29 00:54:00 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:54:00.897752 | orchestrator | 2026-03-29 00:54:00 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:00.897775 | orchestrator | 2026-03-29 00:54:00 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:03.938407 | orchestrator | 2026-03-29 00:54:03 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:03.938485 | orchestrator | 2026-03-29 00:54:03 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state STARTED
2026-03-29 00:54:03.940378 | orchestrator | 2026-03-29 00:54:03 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:03.940412 | orchestrator | 2026-03-29 00:54:03 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:06.969013 | orchestrator | 2026-03-29 00:54:06 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:06.971104 | orchestrator |
2026-03-29 00:54:06.971203 | orchestrator | 2026-03-29 00:54:06 | INFO  | Task d4deeade-b1c3-41c8-bac0-740f4cf6c48d is in state SUCCESS
2026-03-29 00:54:06.971989 | orchestrator |
2026-03-29 00:54:06.972017 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 00:54:06.972022 | orchestrator |
2026-03-29 00:54:06.972027 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 00:54:06.972031 | orchestrator | Sunday 29 March 2026 00:51:50 +0000 (0:00:00.149) 0:00:00.149 **********
2026-03-29 00:54:06.972036 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:54:06.972041 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:54:06.972045 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:54:06.972049 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:54:06.972053 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:54:06.972057 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:54:06.972061 | orchestrator |
2026-03-29 00:54:06.972065 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 00:54:06.972069 |
orchestrator | Sunday 29 March 2026 00:51:51 +0000 (0:00:00.809) 0:00:00.958 ********** 2026-03-29 00:54:06.972073 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-29 00:54:06.972078 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-29 00:54:06.972081 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-29 00:54:06.972085 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-29 00:54:06.972089 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-29 00:54:06.972093 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-29 00:54:06.972097 | orchestrator | 2026-03-29 00:54:06.972101 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-29 00:54:06.972105 | orchestrator | 2026-03-29 00:54:06.972108 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-29 00:54:06.972112 | orchestrator | Sunday 29 March 2026 00:51:52 +0000 (0:00:00.772) 0:00:01.730 ********** 2026-03-29 00:54:06.972131 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:54:06.972136 | orchestrator | 2026-03-29 00:54:06.972140 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-29 00:54:06.972144 | orchestrator | Sunday 29 March 2026 00:51:53 +0000 (0:00:01.013) 0:00:02.744 ********** 2026-03-29 00:54:06.972149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-29 00:54:06.972195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972199 | orchestrator | 2026-03-29 00:54:06.972203 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-29 00:54:06.972207 | orchestrator | Sunday 29 March 2026 00:51:54 +0000 (0:00:01.060) 0:00:03.804 ********** 2026-03-29 00:54:06.972252 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972265 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972520 | orchestrator | 2026-03-29 00:54:06.972525 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-29 00:54:06.972529 | orchestrator | Sunday 29 March 2026 00:51:55 +0000 (0:00:01.459) 0:00:05.264 ********** 2026-03-29 00:54:06.972533 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972607 | orchestrator | 2026-03-29 00:54:06.972613 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-29 00:54:06.972620 | orchestrator | Sunday 29 March 2026 00:51:57 +0000 (0:00:01.569) 0:00:06.833 ********** 2026-03-29 00:54:06.972626 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972633 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-29 00:54:06.972640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972676 | orchestrator | 2026-03-29 00:54:06.972682 
| orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-29 00:54:06.972689 | orchestrator | Sunday 29 March 2026 00:51:59 +0000 (0:00:02.124) 0:00:08.958 ********** 2026-03-29 00:54:06.972696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972726 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.972747 | orchestrator | 2026-03-29 00:54:06.972754 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-29 00:54:06.972761 | orchestrator | Sunday 29 March 2026 00:52:00 +0000 (0:00:01.298) 0:00:10.257 ********** 2026-03-29 00:54:06.972769 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:54:06.972826 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:54:06.972834 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:54:06.972840 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:54:06.972847 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:54:06.972853 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:54:06.972860 | orchestrator | 2026-03-29 00:54:06.972867 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-29 00:54:06.972874 | 
orchestrator | Sunday 29 March 2026 00:52:03 +0000 (0:00:02.570) 0:00:12.827 **********
2026-03-29 00:54:06.972881 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-03-29 00:54:06.972888 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-03-29 00:54:06.972895 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-29 00:54:06.972906 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-03-29 00:54:06.972913 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-03-29 00:54:06.972920 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-29 00:54:06.972926 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-29 00:54:06.972938 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-29 00:54:06.972945 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-29 00:54:06.972952 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-29 00:54:06.972959 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-29 00:54:06.972966 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-29 00:54:06.972972 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-29 00:54:06.972981 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-29 00:54:06.972988 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-29 00:54:06.972998 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-29 00:54:06.973005 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-29 00:54:06.973012 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-29 00:54:06.973019 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-29 00:54:06.973026 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-29 00:54:06.973033 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-29 00:54:06.973040 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-29 00:54:06.973046 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-29 00:54:06.973053 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-29 00:54:06.973060 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-29 00:54:06.973066 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-29 00:54:06.973073 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-29 00:54:06.973080 | orchestrator
| changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 00:54:06.973087 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 00:54:06.973094 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-29 00:54:06.973101 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:54:06.973108 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:54:06.973114 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:54:06.973122 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:54:06.973128 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:54:06.973135 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 00:54:06.973146 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-29 00:54:06.973154 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 00:54:06.973162 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-29 00:54:06.973169 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 00:54:06.973179 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-29 00:54:06.973186 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 00:54:06.973193 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-29 00:54:06.973200 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 00:54:06.973207 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-29 00:54:06.973214 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-29 00:54:06.973221 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-29 00:54:06.973228 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-29 00:54:06.973235 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-29 00:54:06.973242 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 00:54:06.973252 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 00:54:06.973259 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-29 00:54:06.973265 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 00:54:06.973272 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-29 00:54:06.973279 | orchestrator | 2026-03-29 00:54:06.973286 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:54:06.973293 | orchestrator | Sunday 29 March 2026 00:52:23 +0000 (0:00:20.323) 0:00:33.151 ********** 2026-03-29 00:54:06.973299 | orchestrator | 2026-03-29 00:54:06.973306 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:54:06.973313 | orchestrator | Sunday 29 March 2026 00:52:23 +0000 (0:00:00.064) 0:00:33.215 ********** 2026-03-29 00:54:06.973320 | orchestrator | 2026-03-29 00:54:06.973327 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:54:06.973334 | orchestrator | Sunday 29 March 2026 00:52:23 +0000 (0:00:00.062) 0:00:33.278 ********** 2026-03-29 00:54:06.973341 | orchestrator | 2026-03-29 00:54:06.973347 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:54:06.973354 | orchestrator | Sunday 29 March 2026 00:52:23 +0000 (0:00:00.064) 0:00:33.342 ********** 2026-03-29 00:54:06.973361 | orchestrator | 2026-03-29 00:54:06.973368 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:54:06.973375 | orchestrator | Sunday 29 March 2026 00:52:23 +0000 (0:00:00.059) 0:00:33.402 ********** 2026-03-29 00:54:06.973386 | orchestrator | 2026-03-29 00:54:06.973393 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-29 00:54:06.973399 | orchestrator | Sunday 29 March 2026 00:52:23 +0000 (0:00:00.060) 0:00:33.463 ********** 2026-03-29 00:54:06.973406 | orchestrator | 2026-03-29 00:54:06.973413 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-29 00:54:06.973419 | orchestrator | 
Sunday 29 March 2026 00:52:23 +0000 (0:00:00.062) 0:00:33.525 ********** 2026-03-29 00:54:06.973426 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:54:06.973433 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:54:06.973440 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:54:06.973446 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:54:06.973453 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:54:06.973460 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:54:06.973467 | orchestrator | 2026-03-29 00:54:06.973474 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-29 00:54:06.973481 | orchestrator | Sunday 29 March 2026 00:52:26 +0000 (0:00:02.512) 0:00:36.038 ********** 2026-03-29 00:54:06.973487 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:54:06.973494 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:54:06.973500 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:54:06.973507 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:54:06.973514 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:54:06.973521 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:54:06.973527 | orchestrator | 2026-03-29 00:54:06.973534 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-29 00:54:06.973541 | orchestrator | 2026-03-29 00:54:06.973548 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-29 00:54:06.973555 | orchestrator | Sunday 29 March 2026 00:53:00 +0000 (0:00:34.532) 0:01:10.571 ********** 2026-03-29 00:54:06.973562 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:54:06.973568 | orchestrator | 2026-03-29 00:54:06.973576 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-29 00:54:06.973582 | orchestrator | Sunday 29 March 
2026 00:53:01 +0000 (0:00:00.771) 0:01:11.342 ********** 2026-03-29 00:54:06.973589 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:54:06.973596 | orchestrator | 2026-03-29 00:54:06.973607 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-29 00:54:06.973613 | orchestrator | Sunday 29 March 2026 00:53:02 +0000 (0:00:00.539) 0:01:11.881 ********** 2026-03-29 00:54:06.973620 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:54:06.973627 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:54:06.973634 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:54:06.973641 | orchestrator | 2026-03-29 00:54:06.973647 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-29 00:54:06.973654 | orchestrator | Sunday 29 March 2026 00:53:03 +0000 (0:00:00.972) 0:01:12.854 ********** 2026-03-29 00:54:06.973662 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:54:06.973668 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:54:06.973675 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:54:06.973681 | orchestrator | 2026-03-29 00:54:06.973688 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-29 00:54:06.973695 | orchestrator | Sunday 29 March 2026 00:53:03 +0000 (0:00:00.338) 0:01:13.192 ********** 2026-03-29 00:54:06.973701 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:54:06.973707 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:54:06.973714 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:54:06.973721 | orchestrator | 2026-03-29 00:54:06.973727 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-29 00:54:06.973734 | orchestrator | Sunday 29 March 2026 00:53:03 +0000 (0:00:00.341) 0:01:13.534 ********** 2026-03-29 00:54:06.973741 | orchestrator | ok: 
[testbed-node-0] 2026-03-29 00:54:06.973748 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:54:06.973764 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:54:06.973772 | orchestrator | 2026-03-29 00:54:06.973841 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-29 00:54:06.973848 | orchestrator | Sunday 29 March 2026 00:53:04 +0000 (0:00:00.291) 0:01:13.825 ********** 2026-03-29 00:54:06.973854 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:54:06.973860 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:54:06.973867 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:54:06.973872 | orchestrator | 2026-03-29 00:54:06.973879 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-29 00:54:06.973885 | orchestrator | Sunday 29 March 2026 00:53:04 +0000 (0:00:00.439) 0:01:14.265 ********** 2026-03-29 00:54:06.973892 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.973898 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.973904 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.973910 | orchestrator | 2026-03-29 00:54:06.973917 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-29 00:54:06.973923 | orchestrator | Sunday 29 March 2026 00:53:04 +0000 (0:00:00.290) 0:01:14.555 ********** 2026-03-29 00:54:06.973929 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.973935 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.973941 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.973947 | orchestrator | 2026-03-29 00:54:06.973954 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-29 00:54:06.973960 | orchestrator | Sunday 29 March 2026 00:53:05 +0000 (0:00:00.258) 0:01:14.814 ********** 2026-03-29 00:54:06.973966 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
00:54:06.973972 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.973978 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.973984 | orchestrator | 2026-03-29 00:54:06.973990 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-29 00:54:06.973997 | orchestrator | Sunday 29 March 2026 00:53:05 +0000 (0:00:00.279) 0:01:15.093 ********** 2026-03-29 00:54:06.974002 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974009 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974043 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974050 | orchestrator | 2026-03-29 00:54:06.974056 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-29 00:54:06.974062 | orchestrator | Sunday 29 March 2026 00:53:05 +0000 (0:00:00.412) 0:01:15.506 ********** 2026-03-29 00:54:06.974068 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974074 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974081 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974087 | orchestrator | 2026-03-29 00:54:06.974093 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-29 00:54:06.974099 | orchestrator | Sunday 29 March 2026 00:53:06 +0000 (0:00:00.265) 0:01:15.772 ********** 2026-03-29 00:54:06.974105 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974111 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974118 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974124 | orchestrator | 2026-03-29 00:54:06.974130 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-29 00:54:06.974136 | orchestrator | Sunday 29 March 2026 00:53:06 +0000 (0:00:00.259) 0:01:16.031 ********** 2026-03-29 00:54:06.974142 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
00:54:06.974148 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974154 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974160 | orchestrator | 2026-03-29 00:54:06.974166 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-29 00:54:06.974172 | orchestrator | Sunday 29 March 2026 00:53:06 +0000 (0:00:00.274) 0:01:16.306 ********** 2026-03-29 00:54:06.974179 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974184 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974190 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974201 | orchestrator | 2026-03-29 00:54:06.974207 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-29 00:54:06.974213 | orchestrator | Sunday 29 March 2026 00:53:07 +0000 (0:00:00.405) 0:01:16.711 ********** 2026-03-29 00:54:06.974219 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974225 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974231 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974238 | orchestrator | 2026-03-29 00:54:06.974244 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-29 00:54:06.974250 | orchestrator | Sunday 29 March 2026 00:53:07 +0000 (0:00:00.279) 0:01:16.991 ********** 2026-03-29 00:54:06.974256 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974262 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974268 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974275 | orchestrator | 2026-03-29 00:54:06.974285 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-29 00:54:06.974292 | orchestrator | Sunday 29 March 2026 00:53:07 +0000 (0:00:00.291) 0:01:17.282 ********** 2026-03-29 00:54:06.974298 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
00:54:06.974304 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974310 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974317 | orchestrator | 2026-03-29 00:54:06.974323 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-29 00:54:06.974329 | orchestrator | Sunday 29 March 2026 00:53:07 +0000 (0:00:00.265) 0:01:17.548 ********** 2026-03-29 00:54:06.974335 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974341 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974347 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974353 | orchestrator | 2026-03-29 00:54:06.974359 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-29 00:54:06.974366 | orchestrator | Sunday 29 March 2026 00:53:08 +0000 (0:00:00.302) 0:01:17.851 ********** 2026-03-29 00:54:06.974372 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:54:06.974378 | orchestrator | 2026-03-29 00:54:06.974384 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-29 00:54:06.974419 | orchestrator | Sunday 29 March 2026 00:53:09 +0000 (0:00:00.971) 0:01:18.822 ********** 2026-03-29 00:54:06.974426 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:54:06.974432 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:54:06.974437 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:54:06.974443 | orchestrator | 2026-03-29 00:54:06.974449 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-29 00:54:06.974456 | orchestrator | Sunday 29 March 2026 00:53:09 +0000 (0:00:00.468) 0:01:19.291 ********** 2026-03-29 00:54:06.974462 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:54:06.974471 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:54:06.974476 | 
orchestrator | ok: [testbed-node-2] 2026-03-29 00:54:06.974483 | orchestrator | 2026-03-29 00:54:06.974489 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-29 00:54:06.974495 | orchestrator | Sunday 29 March 2026 00:53:10 +0000 (0:00:00.515) 0:01:19.807 ********** 2026-03-29 00:54:06.974501 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974507 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974512 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974519 | orchestrator | 2026-03-29 00:54:06.974525 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-29 00:54:06.974532 | orchestrator | Sunday 29 March 2026 00:53:10 +0000 (0:00:00.600) 0:01:20.408 ********** 2026-03-29 00:54:06.974538 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974545 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974550 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974556 | orchestrator | 2026-03-29 00:54:06.974562 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-29 00:54:06.974572 | orchestrator | Sunday 29 March 2026 00:53:11 +0000 (0:00:00.327) 0:01:20.735 ********** 2026-03-29 00:54:06.974576 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974580 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974584 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974587 | orchestrator | 2026-03-29 00:54:06.974591 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-29 00:54:06.974595 | orchestrator | Sunday 29 March 2026 00:53:11 +0000 (0:00:00.375) 0:01:21.111 ********** 2026-03-29 00:54:06.974599 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974603 | orchestrator | skipping: [testbed-node-1] 2026-03-29 
00:54:06.974606 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974610 | orchestrator | 2026-03-29 00:54:06.974614 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-29 00:54:06.974618 | orchestrator | Sunday 29 March 2026 00:53:11 +0000 (0:00:00.497) 0:01:21.609 ********** 2026-03-29 00:54:06.974622 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974625 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974629 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974633 | orchestrator | 2026-03-29 00:54:06.974637 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-29 00:54:06.974641 | orchestrator | Sunday 29 March 2026 00:53:12 +0000 (0:00:00.598) 0:01:22.207 ********** 2026-03-29 00:54:06.974644 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:54:06.974648 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:54:06.974652 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:54:06.974656 | orchestrator | 2026-03-29 00:54:06.974660 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-29 00:54:06.974663 | orchestrator | Sunday 29 March 2026 00:53:12 +0000 (0:00:00.330) 0:01:22.538 ********** 2026-03-29 00:54:06.974668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-29 00:54:06.974713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974731 | orchestrator | 2026-03-29 00:54:06.974737 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-29 00:54:06.974744 | orchestrator | Sunday 29 March 2026 00:53:14 +0000 (0:00:01.413) 0:01:23.951 ********** 2026-03-29 00:54:06.974750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 00:54:06.974802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974822 | orchestrator |
2026-03-29 00:54:06.974825 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-29 00:54:06.974829 | orchestrator | Sunday 29 March 2026 00:53:18 +0000 (0:00:04.001) 0:01:27.952 **********
2026-03-29 00:54:06.974833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.974880 | orchestrator |
2026-03-29 00:54:06.974884 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 00:54:06.974887 | orchestrator | Sunday 29 March 2026 00:53:20 +0000 (0:00:02.500) 0:01:30.453 **********
2026-03-29 00:54:06.974891 | orchestrator |
2026-03-29 00:54:06.974895 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 00:54:06.974899 | orchestrator | Sunday 29 March 2026 00:53:20 +0000 (0:00:00.064) 0:01:30.518 **********
2026-03-29 00:54:06.974903 | orchestrator |
2026-03-29 00:54:06.974906 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 00:54:06.974910 | orchestrator | Sunday 29 March 2026 00:53:20 +0000 (0:00:00.063) 0:01:30.581 **********
2026-03-29 00:54:06.974914 | orchestrator |
2026-03-29 00:54:06.974918 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-29 00:54:06.974922 | orchestrator | Sunday 29 March 2026 00:53:20 +0000 (0:00:00.066) 0:01:30.648 **********
2026-03-29 00:54:06.974925 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:54:06.974929 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:54:06.974933 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:54:06.974937 | orchestrator |
2026-03-29 00:54:06.974941 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-29 00:54:06.974945 | orchestrator | Sunday 29 March 2026 00:53:23 +0000 (0:00:02.483) 0:01:33.133 **********
2026-03-29 00:54:06.974949 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:54:06.974953 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:54:06.974956 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:54:06.974960 | orchestrator |
2026-03-29 00:54:06.974964 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-29 00:54:06.974968 | orchestrator | Sunday 29 March 2026 00:53:26 +0000 (0:00:02.619) 0:01:35.752 **********
2026-03-29 00:54:06.974972 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:54:06.974975 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:54:06.974979 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:54:06.974988 | orchestrator |
2026-03-29 00:54:06.974992 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-29 00:54:06.974996 | orchestrator | Sunday 29 March 2026 00:53:28 +0000 (0:00:02.763) 0:01:38.516 **********
2026-03-29 00:54:06.975000 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:54:06.975004 | orchestrator |
2026-03-29 00:54:06.975008 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-29 00:54:06.975012 | orchestrator | Sunday 29 March 2026 00:53:28 +0000 (0:00:00.130) 0:01:38.646 **********
2026-03-29 00:54:06.975016 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:54:06.975020 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:54:06.975023 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:54:06.975027 | orchestrator |
2026-03-29 00:54:06.975033 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-29 00:54:06.975038 | orchestrator | Sunday 29 March 2026 00:53:29 +0000 (0:00:00.796) 0:01:39.443 **********
2026-03-29 00:54:06.975041 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:54:06.975045 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:54:06.975049 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:54:06.975053 | orchestrator |
2026-03-29 00:54:06.975056 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-29 00:54:06.975060 | orchestrator | Sunday 29 March 2026 00:53:30 +0000 (0:00:00.595) 0:01:40.038 **********
2026-03-29 00:54:06.975064 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:54:06.975068 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:54:06.975072 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:54:06.975075 | orchestrator |
2026-03-29 00:54:06.975079 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-29 00:54:06.975083 | orchestrator | Sunday 29 March 2026 00:53:31 +0000 (0:00:00.770) 0:01:40.809 **********
2026-03-29 00:54:06.975087 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:54:06.975091 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:54:06.975095 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:54:06.975098 | orchestrator |
2026-03-29 00:54:06.975102 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-29 00:54:06.975106 | orchestrator | Sunday 29 March 2026 00:53:31 +0000 (0:00:00.842) 0:01:41.651 **********
2026-03-29 00:54:06.975110 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:54:06.975114 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:54:06.975118 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:54:06.975122 | orchestrator |
2026-03-29 00:54:06.975125 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-29 00:54:06.975129 | orchestrator | Sunday 29 March 2026 00:53:32 +0000 (0:00:00.820) 0:01:42.472 **********
2026-03-29 00:54:06.975133 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:54:06.975137 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:54:06.975141 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:54:06.975145 | orchestrator |
2026-03-29 00:54:06.975151 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-29 00:54:06.975155 | orchestrator | Sunday 29 March 2026 00:53:33 +0000 (0:00:00.787) 0:01:43.260 **********
2026-03-29 00:54:06.975159 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:54:06.975163 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:54:06.975167 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:54:06.975170 | orchestrator |
2026-03-29 00:54:06.975174 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-29 00:54:06.975178 | orchestrator | Sunday 29 March 2026 00:53:33 +0000 (0:00:00.292) 0:01:43.553 **********
2026-03-29 00:54:06.975182 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975186 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975195 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975199 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975203 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975207 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975214 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975218 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975222 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975226 | orchestrator |
2026-03-29 00:54:06.975232 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-29 00:54:06.975236 | orchestrator | Sunday 29 March 2026 00:53:35 +0000 (0:00:01.461) 0:01:45.015 **********
2026-03-29 00:54:06.975240 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975249 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975253 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975261 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975294 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975308 | orchestrator |
2026-03-29 00:54:06.975317 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-29 00:54:06.975323 | orchestrator | Sunday 29 March 2026 00:53:39 +0000 (0:00:04.122) 0:01:49.137 **********
2026-03-29 00:54:06.975332 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975344 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975350 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975362 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975394 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:54:06.975400 | orchestrator |
2026-03-29 00:54:06.975406 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 00:54:06.975413 | orchestrator | Sunday 29 March 2026 00:53:42 +0000 (0:00:03.015) 0:01:52.153 **********
2026-03-29 00:54:06.975420 | orchestrator |
2026-03-29 00:54:06.975423 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 00:54:06.975427 | orchestrator | Sunday 29 March 2026 00:53:42 +0000 (0:00:00.079) 0:01:52.232 **********
2026-03-29 00:54:06.975436 | orchestrator |
2026-03-29 00:54:06.975439 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-29 00:54:06.975443 | orchestrator | Sunday 29 March 2026 00:53:42 +0000 (0:00:00.077) 0:01:52.310 **********
2026-03-29 00:54:06.975447 | orchestrator |
2026-03-29 00:54:06.975451 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-29 00:54:06.975457 | orchestrator | Sunday 29 March 2026 00:53:42 +0000 (0:00:00.066) 0:01:52.376 **********
2026-03-29 00:54:06.975461 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:54:06.975465 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:54:06.975469 | orchestrator |
2026-03-29 00:54:06.975472 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-29 00:54:06.975476 | orchestrator | Sunday 29 March 2026 00:53:48 +0000 (0:00:06.180) 0:01:58.556 **********
2026-03-29 00:54:06.975480 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:54:06.975484 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:54:06.975487 | orchestrator |
2026-03-29 00:54:06.975492 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-29 00:54:06.975498 | orchestrator | Sunday 29 March 2026 00:53:55 +0000 (0:00:06.175) 0:02:04.731 **********
2026-03-29 00:54:06.975504 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:54:06.975510 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:54:06.975519 | orchestrator |
2026-03-29 00:54:06.975525 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-29 00:54:06.975531 | orchestrator | Sunday 29 March 2026 00:54:01 +0000 (0:00:06.605) 0:02:11.337 **********
2026-03-29 00:54:06.975536 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:54:06.975542 | orchestrator |
2026-03-29 00:54:06.975548 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-29 00:54:06.975553 | orchestrator | Sunday 29 March 2026 00:54:01 +0000 (0:00:00.150) 0:02:11.487 **********
2026-03-29 00:54:06.975558 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:54:06.975563 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:54:06.975569 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:54:06.975575 | orchestrator |
2026-03-29 00:54:06.975581 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-29 00:54:06.975586 | orchestrator | Sunday 29 March 2026 00:54:02 +0000 (0:00:00.733) 0:02:12.221 **********
2026-03-29 00:54:06.975592 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:54:06.975598 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:54:06.975605 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:54:06.975611 | orchestrator |
2026-03-29 00:54:06.975617 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-29 00:54:06.975623 | orchestrator | Sunday 29 March 2026 00:54:03 +0000 (0:00:00.685) 0:02:12.906 **********
2026-03-29 00:54:06.975629 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:54:06.975635 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:54:06.975641 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:54:06.975647 | orchestrator |
2026-03-29 00:54:06.975653 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-29 00:54:06.975660 | orchestrator | Sunday 29 March 2026 00:54:03 +0000 (0:00:00.782) 0:02:13.689 **********
2026-03-29 00:54:06.975665 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:54:06.975672 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:54:06.975678 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:54:06.975684 | orchestrator |
2026-03-29 00:54:06.975690 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-29 00:54:06.975696 | orchestrator | Sunday 29 March 2026 00:54:04 +0000 (0:00:00.674) 0:02:14.364 **********
2026-03-29 00:54:06.975702 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:54:06.975708 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:54:06.975714 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:54:06.975720 | orchestrator |
2026-03-29 00:54:06.975726 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-29 00:54:06.975731 | orchestrator | Sunday 29 March 2026 00:54:05 +0000 (0:00:00.735) 0:02:15.100 **********
2026-03-29 00:54:06.975744 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:54:06.975750 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:54:06.975756 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:54:06.975761 | orchestrator |
2026-03-29 00:54:06.975768 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 00:54:06.975798 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-29 00:54:06.975808 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-29 00:54:06.975819 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-29 00:54:06.975825 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:54:06.975831 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:54:06.975837 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 00:54:06.975843 | orchestrator |
2026-03-29 00:54:06.975849 | orchestrator |
2026-03-29 00:54:06.975855 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 00:54:06.975861 | orchestrator | Sunday 29 March 2026 00:54:06 +0000 (0:00:00.917) 0:02:16.017 **********
2026-03-29 00:54:06.975866 | orchestrator | ===============================================================================
2026-03-29 00:54:06.975872 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.53s
2026-03-29 00:54:06.975878 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.32s
2026-03-29 00:54:06.975884 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.37s
2026-03-29 00:54:06.975890 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.80s
2026-03-29 00:54:06.975896 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.66s
2026-03-29 00:54:06.975906 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.12s
2026-03-29 00:54:06.975911 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.00s
2026-03-29 00:54:06.975917 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.02s
2026-03-29 00:54:06.975923 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.57s
2026-03-29 00:54:06.975929 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.51s
2026-03-29 00:54:06.975935 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.50s
2026-03-29 00:54:06.975941 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.12s
2026-03-29 00:54:06.975946 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.57s
2026-03-29 00:54:06.975952 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.46s
2026-03-29 00:54:06.975958 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.46s
2026-03-29 00:54:06.975964 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s
2026-03-29 00:54:06.975969 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.30s
2026-03-29 00:54:06.975975 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.06s
2026-03-29 00:54:06.975981 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.01s
2026-03-29 00:54:06.975987 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 0.97s
2026-03-29 00:54:06.975997 | orchestrator | 2026-03-29 00:54:06 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:06.976004 | orchestrator | 2026-03-29 00:54:06 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:10.009051 | orchestrator | 2026-03-29 00:54:10 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:10.010720 | orchestrator | 2026-03-29 00:54:10 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:10.011370 | orchestrator | 2026-03-29 00:54:10 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:13.053624 | orchestrator | 2026-03-29 00:54:13 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:13.057047 | orchestrator | 2026-03-29 00:54:13 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:13.057107 | orchestrator | 2026-03-29 00:54:13 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:16.097267 | orchestrator | 2026-03-29 00:54:16 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:16.097336 | orchestrator | 2026-03-29 00:54:16 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:16.097343 | orchestrator | 2026-03-29 00:54:16 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:19.145751 | orchestrator | 2026-03-29 00:54:19 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:19.147195 | orchestrator | 2026-03-29 00:54:19 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:19.147489 | orchestrator | 2026-03-29 00:54:19 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:22.191299 | orchestrator | 2026-03-29 00:54:22 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:22.192943 | orchestrator | 2026-03-29 00:54:22 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:22.193006 | orchestrator | 2026-03-29 00:54:22 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:25.242062 | orchestrator | 2026-03-29 00:54:25 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:25.242608 | orchestrator | 2026-03-29 00:54:25 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:25.242666 | orchestrator | 2026-03-29 00:54:25 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:28.284093 | orchestrator | 2026-03-29 00:54:28 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:28.286676 | orchestrator | 2026-03-29 00:54:28 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:28.286916 | orchestrator | 2026-03-29 00:54:28 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:31.320012 | orchestrator | 2026-03-29 00:54:31 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:31.320920 | orchestrator | 2026-03-29 00:54:31 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:31.320966 | orchestrator | 2026-03-29 00:54:31 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:34.360784 | orchestrator | 2026-03-29 00:54:34 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:34.361263 | orchestrator | 2026-03-29 00:54:34 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:34.361496 | orchestrator | 2026-03-29 00:54:34 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:37.398802 | orchestrator | 2026-03-29 00:54:37 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:37.401467 | orchestrator | 2026-03-29 00:54:37 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:37.401531 | orchestrator | 2026-03-29 00:54:37 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:40.442540 | orchestrator | 2026-03-29 00:54:40 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:40.443994 | orchestrator | 2026-03-29 00:54:40 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:40.444229 | orchestrator | 2026-03-29 00:54:40 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:43.492861 | orchestrator | 2026-03-29 00:54:43 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:43.494115 | orchestrator | 2026-03-29 00:54:43 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:43.494323 | orchestrator | 2026-03-29 00:54:43 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:46.527327 | orchestrator | 2026-03-29 00:54:46 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:46.528038 | orchestrator | 2026-03-29 00:54:46 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:46.528076 | orchestrator | 2026-03-29 00:54:46 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:49.564895 | orchestrator | 2026-03-29 00:54:49 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:49.568061 | orchestrator | 2026-03-29 00:54:49 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:49.568147 | orchestrator | 2026-03-29 00:54:49 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:52.618545 | orchestrator | 2026-03-29 00:54:52 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:52.620715 | orchestrator | 2026-03-29 00:54:52 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:52.620865 | orchestrator | 2026-03-29 00:54:52 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:55.661912 | orchestrator | 2026-03-29 00:54:55 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:55.663315 | orchestrator | 2026-03-29 00:54:55 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:55.663448 | orchestrator | 2026-03-29 00:54:55 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:54:58.711470 | orchestrator | 2026-03-29 00:54:58 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:54:58.712178 | orchestrator | 2026-03-29 00:54:58 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:54:58.712217 | orchestrator | 2026-03-29 00:54:58 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:55:01.747416 | orchestrator | 2026-03-29 00:55:01 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:55:01.747850 | orchestrator | 2026-03-29 00:55:01 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:55:01.747867 | orchestrator | 2026-03-29 00:55:01 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:55:04.777618 | orchestrator | 2026-03-29 00:55:04 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:55:04.778268 | orchestrator | 2026-03-29 00:55:04 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED
2026-03-29 00:55:04.778351 | orchestrator | 2026-03-29 00:55:04 | INFO  | Wait 1 second(s) until the next check
2026-03-29 00:55:07.811990 | orchestrator | 2026-03-29 00:55:07 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED
2026-03-29 00:55:07.812464 | orchestrator | 2026-03-29 00:55:07 | INFO  | Task
99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:07.812483 | orchestrator | 2026-03-29 00:55:07 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:10.849975 | orchestrator | 2026-03-29 00:55:10 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:10.850134 | orchestrator | 2026-03-29 00:55:10 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:10.850160 | orchestrator | 2026-03-29 00:55:10 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:13.899558 | orchestrator | 2026-03-29 00:55:13 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:13.899648 | orchestrator | 2026-03-29 00:55:13 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:13.899662 | orchestrator | 2026-03-29 00:55:13 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:16.938332 | orchestrator | 2026-03-29 00:55:16 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:16.939727 | orchestrator | 2026-03-29 00:55:16 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:16.940350 | orchestrator | 2026-03-29 00:55:16 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:19.985042 | orchestrator | 2026-03-29 00:55:19 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:19.985354 | orchestrator | 2026-03-29 00:55:19 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:19.985380 | orchestrator | 2026-03-29 00:55:19 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:23.019439 | orchestrator | 2026-03-29 00:55:23 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:23.022781 | orchestrator | 2026-03-29 00:55:23 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 
00:55:23.022858 | orchestrator | 2026-03-29 00:55:23 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:26.061585 | orchestrator | 2026-03-29 00:55:26 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:26.062121 | orchestrator | 2026-03-29 00:55:26 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:26.062160 | orchestrator | 2026-03-29 00:55:26 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:29.102926 | orchestrator | 2026-03-29 00:55:29 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:29.104076 | orchestrator | 2026-03-29 00:55:29 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:29.104133 | orchestrator | 2026-03-29 00:55:29 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:32.138320 | orchestrator | 2026-03-29 00:55:32 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:32.138369 | orchestrator | 2026-03-29 00:55:32 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:32.138374 | orchestrator | 2026-03-29 00:55:32 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:35.169536 | orchestrator | 2026-03-29 00:55:35 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:35.169873 | orchestrator | 2026-03-29 00:55:35 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:35.169900 | orchestrator | 2026-03-29 00:55:35 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:38.216227 | orchestrator | 2026-03-29 00:55:38 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:38.218631 | orchestrator | 2026-03-29 00:55:38 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:38.218719 | orchestrator | 2026-03-29 00:55:38 | INFO  | Wait 1 second(s) 
until the next check 2026-03-29 00:55:41.258261 | orchestrator | 2026-03-29 00:55:41 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:41.258312 | orchestrator | 2026-03-29 00:55:41 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:41.258318 | orchestrator | 2026-03-29 00:55:41 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:44.296952 | orchestrator | 2026-03-29 00:55:44 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:44.297337 | orchestrator | 2026-03-29 00:55:44 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:44.297600 | orchestrator | 2026-03-29 00:55:44 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:47.345774 | orchestrator | 2026-03-29 00:55:47 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:47.347541 | orchestrator | 2026-03-29 00:55:47 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:47.347594 | orchestrator | 2026-03-29 00:55:47 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:50.397103 | orchestrator | 2026-03-29 00:55:50 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:50.399610 | orchestrator | 2026-03-29 00:55:50 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:50.399764 | orchestrator | 2026-03-29 00:55:50 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:53.448210 | orchestrator | 2026-03-29 00:55:53 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:53.449914 | orchestrator | 2026-03-29 00:55:53 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:53.449958 | orchestrator | 2026-03-29 00:55:53 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:56.490701 | orchestrator | 2026-03-29 
00:55:56 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:56.491726 | orchestrator | 2026-03-29 00:55:56 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:56.492552 | orchestrator | 2026-03-29 00:55:56 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:55:59.548175 | orchestrator | 2026-03-29 00:55:59 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:55:59.551632 | orchestrator | 2026-03-29 00:55:59 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:55:59.551790 | orchestrator | 2026-03-29 00:55:59 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:02.594939 | orchestrator | 2026-03-29 00:56:02 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:02.596579 | orchestrator | 2026-03-29 00:56:02 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:02.596615 | orchestrator | 2026-03-29 00:56:02 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:05.652723 | orchestrator | 2026-03-29 00:56:05 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:05.654837 | orchestrator | 2026-03-29 00:56:05 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:05.654875 | orchestrator | 2026-03-29 00:56:05 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:08.691870 | orchestrator | 2026-03-29 00:56:08 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:08.693324 | orchestrator | 2026-03-29 00:56:08 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:08.693379 | orchestrator | 2026-03-29 00:56:08 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:11.735687 | orchestrator | 2026-03-29 00:56:11 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state 
STARTED 2026-03-29 00:56:11.738642 | orchestrator | 2026-03-29 00:56:11 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:11.738683 | orchestrator | 2026-03-29 00:56:11 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:14.781951 | orchestrator | 2026-03-29 00:56:14 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:14.784340 | orchestrator | 2026-03-29 00:56:14 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:14.784984 | orchestrator | 2026-03-29 00:56:14 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:17.827448 | orchestrator | 2026-03-29 00:56:17 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:17.828178 | orchestrator | 2026-03-29 00:56:17 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:17.828224 | orchestrator | 2026-03-29 00:56:17 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:20.858773 | orchestrator | 2026-03-29 00:56:20 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:20.860106 | orchestrator | 2026-03-29 00:56:20 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:20.860257 | orchestrator | 2026-03-29 00:56:20 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:23.899817 | orchestrator | 2026-03-29 00:56:23 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:23.902142 | orchestrator | 2026-03-29 00:56:23 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:23.902227 | orchestrator | 2026-03-29 00:56:23 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:26.943169 | orchestrator | 2026-03-29 00:56:26 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:26.944276 | orchestrator | 2026-03-29 00:56:26 | INFO  
| Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:26.944312 | orchestrator | 2026-03-29 00:56:26 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:29.994363 | orchestrator | 2026-03-29 00:56:29 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:29.996090 | orchestrator | 2026-03-29 00:56:29 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:29.996191 | orchestrator | 2026-03-29 00:56:29 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:33.041565 | orchestrator | 2026-03-29 00:56:33 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:33.042774 | orchestrator | 2026-03-29 00:56:33 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:33.042792 | orchestrator | 2026-03-29 00:56:33 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:36.089826 | orchestrator | 2026-03-29 00:56:36 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:36.091656 | orchestrator | 2026-03-29 00:56:36 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:36.091766 | orchestrator | 2026-03-29 00:56:36 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:39.135937 | orchestrator | 2026-03-29 00:56:39 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:39.139252 | orchestrator | 2026-03-29 00:56:39 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:39.139301 | orchestrator | 2026-03-29 00:56:39 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:42.194963 | orchestrator | 2026-03-29 00:56:42 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:42.199191 | orchestrator | 2026-03-29 00:56:42 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 
00:56:42.199283 | orchestrator | 2026-03-29 00:56:42 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:45.244181 | orchestrator | 2026-03-29 00:56:45 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:45.245530 | orchestrator | 2026-03-29 00:56:45 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:45.245630 | orchestrator | 2026-03-29 00:56:45 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:48.308524 | orchestrator | 2026-03-29 00:56:48 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:48.308675 | orchestrator | 2026-03-29 00:56:48 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:48.308692 | orchestrator | 2026-03-29 00:56:48 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:51.348310 | orchestrator | 2026-03-29 00:56:51 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state STARTED 2026-03-29 00:56:51.349881 | orchestrator | 2026-03-29 00:56:51 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:51.350561 | orchestrator | 2026-03-29 00:56:51 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:54.387502 | orchestrator | 2026-03-29 00:56:54 | INFO  | Task fdf4a2ff-37ee-4326-a298-298b9553c4ab is in state SUCCESS 2026-03-29 00:56:54.388296 | orchestrator | 2026-03-29 00:56:54.388329 | orchestrator | 2026-03-29 00:56:54.388335 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:56:54.388341 | orchestrator | 2026-03-29 00:56:54.388345 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:56:54.388350 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.245) 0:00:00.245 ********** 2026-03-29 00:56:54.388355 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.388361 | orchestrator | 
ok: [testbed-node-1]
2026-03-29 00:56:54.388365 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:54.388369 | orchestrator |
2026-03-29 00:56:54.388374 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 00:56:54.388378 | orchestrator | Sunday 29 March 2026 00:50:43 +0000 (0:00:00.256) 0:00:00.501 **********
2026-03-29 00:56:54.388383 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-29 00:56:54.388388 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-29 00:56:54.388393 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-29 00:56:54.388397 | orchestrator |
2026-03-29 00:56:54.388401 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-29 00:56:54.388405 | orchestrator |
2026-03-29 00:56:54.388438 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-29 00:56:54.388443 | orchestrator | Sunday 29 March 2026 00:50:44 +0000 (0:00:00.447) 0:00:00.948 **********
2026-03-29 00:56:54.388447 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:54.388451 | orchestrator |
2026-03-29 00:56:54.388456 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-29 00:56:54.388460 | orchestrator | Sunday 29 March 2026 00:50:44 +0000 (0:00:00.629) 0:00:01.578 **********
2026-03-29 00:56:54.388464 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:54.388469 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:54.388473 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:54.388477 | orchestrator |
2026-03-29 00:56:54.388482 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-29 00:56:54.388486 | orchestrator | Sunday 29 March 2026 00:50:45 +0000 (0:00:00.659) 0:00:02.237 **********
2026-03-29 00:56:54.388490 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:54.388494 | orchestrator |
2026-03-29 00:56:54.388498 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-29 00:56:54.388502 | orchestrator | Sunday 29 March 2026 00:50:46 +0000 (0:00:01.050) 0:00:03.288 **********
2026-03-29 00:56:54.388505 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:54.388509 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:54.388513 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:54.388516 | orchestrator |
2026-03-29 00:56:54.388980 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-29 00:56:54.388990 | orchestrator | Sunday 29 March 2026 00:50:47 +0000 (0:00:00.745) 0:00:04.034 **********
2026-03-29 00:56:54.388995 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:56:54.388999 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:56:54.389003 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:56:54.389007 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:56:54.389011 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:56:54.389015 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-29 00:56:54.389018 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-29 00:56:54.389023 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-29 00:56:54.389027 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-29 00:56:54.389031 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-29 00:56:54.389034 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-29 00:56:54.389038 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-29 00:56:54.389042 | orchestrator |
2026-03-29 00:56:54.389046 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-29 00:56:54.389049 | orchestrator | Sunday 29 March 2026 00:50:49 +0000 (0:00:02.278) 0:00:06.312 **********
2026-03-29 00:56:54.389053 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-29 00:56:54.389058 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-29 00:56:54.389062 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-29 00:56:54.389065 | orchestrator |
2026-03-29 00:56:54.389070 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-29 00:56:54.389076 | orchestrator | Sunday 29 March 2026 00:50:50 +0000 (0:00:00.911) 0:00:07.223 **********
2026-03-29 00:56:54.389095 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-29 00:56:54.389102 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-29 00:56:54.389108 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-29 00:56:54.389113 | orchestrator |
2026-03-29 00:56:54.389119 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-29 00:56:54.389125 | orchestrator | Sunday 29 March 2026 00:50:51 +0000 (0:00:01.375) 0:00:08.599 **********
2026-03-29 00:56:54.389130 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-29 00:56:54.389136 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.389154 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-29 00:56:54.389159 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.389165 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-29 00:56:54.389171 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.389176 | orchestrator |
2026-03-29 00:56:54.389182 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-29 00:56:54.389188 | orchestrator | Sunday 29 March 2026 00:50:52 +0000 (0:00:00.869) 0:00:09.468 **********
2026-03-29 00:56:54.389241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.389253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.389257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.389261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.389266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.389283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.389288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.389293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.389297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived',
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.389301 | orchestrator |
2026-03-29 00:56:54.389304 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-03-29 00:56:54.389308 | orchestrator | Sunday 29 March 2026 00:50:54 +0000 (0:00:01.961) 0:00:11.429 **********
2026-03-29 00:56:54.389312 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:56:54.389316 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:56:54.389320 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:56:54.389323 | orchestrator |
2026-03-29 00:56:54.389327 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-03-29 00:56:54.389331 | orchestrator | Sunday 29 March 2026 00:50:56 +0000 (0:00:01.416) 0:00:12.846 **********
2026-03-29 00:56:54.389334 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-29 00:56:54.389338 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-29 00:56:54.389342 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-29 00:56:54.389346 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-29 00:56:54.389349 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-29 00:56:54.389353 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-29 00:56:54.389357 | orchestrator |
2026-03-29 00:56:54.389360 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-29 00:56:54.389368 | orchestrator | Sunday 29 March 2026 00:50:58 +0000 (0:00:01.962) 0:00:14.808 **********
2026-03-29 00:56:54.389372 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:56:54.389376 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:56:54.389379 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:56:54.389383 | orchestrator |
2026-03-29 00:56:54.389387 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-29 00:56:54.389390 | orchestrator | Sunday 29 March 2026 00:50:59 +0000 (0:00:01.742) 0:00:16.551 **********
2026-03-29 00:56:54.389394 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:56:54.389398 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:56:54.389402 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:56:54.389405 | orchestrator |
2026-03-29 00:56:54.389409 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-29 00:56:54.389413 | orchestrator | Sunday 29 March 2026 00:51:01 +0000 (0:00:01.355) 0:00:17.907 **********
2026-03-29 00:56:54.389417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.389426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.389623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.389638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-03-29 00:56:54.389642 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.389646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.389656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.389660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.389678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130',
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 00:56:54.389682 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.389686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.389693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.389697 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.389708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 00:56:54.389712 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.389716 | orchestrator | 2026-03-29 00:56:54.389720 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-29 00:56:54.389724 | orchestrator | Sunday 29 March 2026 00:51:03 +0000 (0:00:02.026) 0:00:19.936 ********** 2026-03-29 00:56:54.389728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 00:56:54.389742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 00:56:54.389746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 00:56:54.389753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:56:54.389757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.389764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 00:56:54.389768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:56:54.389772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:56:54.389787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.389791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.389798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 00:56:54.389809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44', '__omit_place_holder__688b2379132520a06093fc7848ecf9ff729e9d44'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-29 00:56:54.389813 | orchestrator | 2026-03-29 00:56:54.389817 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-29 00:56:54.389820 | orchestrator | Sunday 29 March 2026 00:51:07 +0000 (0:00:03.848) 0:00:23.785 ********** 2026-03-29 00:56:54.389824 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 00:56:54.389828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 00:56:54.389843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 
2026-03-29 00:56:54.389850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:56:54.389854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:56:54.389862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:56:54.389866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:56:54.389870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:56:54.389874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:56:54.389878 | orchestrator | 2026-03-29 00:56:54.389882 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-29 00:56:54.389886 | orchestrator | Sunday 29 March 2026 00:51:10 +0000 (0:00:03.447) 0:00:27.233 
********** 2026-03-29 00:56:54.389890 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-29 00:56:54.389896 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-29 00:56:54.389900 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-29 00:56:54.389905 | orchestrator | 2026-03-29 00:56:54.389911 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-29 00:56:54.389917 | orchestrator | Sunday 29 March 2026 00:51:12 +0000 (0:00:02.415) 0:00:29.649 ********** 2026-03-29 00:56:54.389923 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-29 00:56:54.389928 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-29 00:56:54.389939 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-29 00:56:54.389944 | orchestrator | 2026-03-29 00:56:54.389950 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-29 00:56:54.389957 | orchestrator | Sunday 29 March 2026 00:51:18 +0000 (0:00:05.570) 0:00:35.219 ********** 2026-03-29 00:56:54.389966 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.389973 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.389978 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.389984 | orchestrator | 2026-03-29 00:56:54.390135 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-29 00:56:54.390141 | orchestrator | Sunday 29 March 2026 00:51:19 +0000 (0:00:00.618) 0:00:35.837 ********** 2026-03-29 00:56:54.390145 | orchestrator | 
changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-29 00:56:54.390151 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-29 00:56:54.390155 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-29 00:56:54.390159 | orchestrator | 2026-03-29 00:56:54.390162 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-29 00:56:54.390166 | orchestrator | Sunday 29 March 2026 00:51:21 +0000 (0:00:02.355) 0:00:38.192 ********** 2026-03-29 00:56:54.390170 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-29 00:56:54.390174 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-29 00:56:54.390178 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-29 00:56:54.390182 | orchestrator | 2026-03-29 00:56:54.390185 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-29 00:56:54.390190 | orchestrator | Sunday 29 March 2026 00:51:24 +0000 (0:00:02.604) 0:00:40.797 ********** 2026-03-29 00:56:54.390194 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-29 00:56:54.390198 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-29 00:56:54.390201 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-29 00:56:54.390205 | orchestrator | 2026-03-29 00:56:54.390209 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-29 00:56:54.390213 | orchestrator | Sunday 29 March 2026 00:51:25 +0000 
(0:00:01.538) 0:00:42.335 ********** 2026-03-29 00:56:54.390216 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-29 00:56:54.390220 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-29 00:56:54.390224 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-29 00:56:54.390227 | orchestrator | 2026-03-29 00:56:54.390231 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-29 00:56:54.390235 | orchestrator | Sunday 29 March 2026 00:51:27 +0000 (0:00:01.842) 0:00:44.178 ********** 2026-03-29 00:56:54.390238 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.390242 | orchestrator | 2026-03-29 00:56:54.390246 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-29 00:56:54.390250 | orchestrator | Sunday 29 March 2026 00:51:28 +0000 (0:00:00.855) 0:00:45.033 ********** 2026-03-29 00:56:54.390254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-29 00:56:54.390271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-29 00:56:54.391095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-29 00:56:54.391357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:56:54.391366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:56:54.391373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-29 00:56:54.391381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:56:54.391398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:56:54.391426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-29 00:56:54.391434 | orchestrator | 2026-03-29 00:56:54.391440 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-29 00:56:54.391446 | orchestrator | Sunday 29 March 2026 00:51:31 +0000 (0:00:03.238) 0:00:48.272 ********** 2026-03-29 00:56:54.391457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.391464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.391470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.391476 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.391482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.391495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.391518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.391525 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.391534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.391540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.391546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.391552 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.391559 | orchestrator | 2026-03-29 00:56:54.391565 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-29 00:56:54.391572 | orchestrator | Sunday 29 March 2026 00:51:32 +0000 (0:00:00.546) 0:00:48.819 ********** 2026-03-29 00:56:54.391629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.391642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.391666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.391673 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.391680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.391690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.392023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.392033 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.392041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.392054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.392061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.392068 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.392074 | orchestrator | 2026-03-29 00:56:54.392081 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-29 00:56:54.392087 | orchestrator | Sunday 29 March 2026 00:51:32 +0000 (0:00:00.734) 0:00:49.554 ********** 2026-03-29 00:56:54.392137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.392151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.392158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.392164 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.392170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.392182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.392188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.392194 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.392263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.392271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.392281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.392288 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.392294 | orchestrator | 2026-03-29 00:56:54.392300 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-29 00:56:54.392306 | orchestrator | Sunday 29 March 2026 00:51:33 
+0000 (0:00:00.800) 0:00:50.354 ********** 2026-03-29 00:56:54.392312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.392699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.392717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.392723 | 
orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.392731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.392795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.392809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.392815 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.392822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.392877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.393102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.393109 | 
orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.393113 | orchestrator | 2026-03-29 00:56:54.393117 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-29 00:56:54.393121 | orchestrator | Sunday 29 March 2026 00:51:34 +0000 (0:00:00.617) 0:00:50.971 ********** 2026-03-29 00:56:54.393125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.393170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.393181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.393186 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.393190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.393200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.393204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.393208 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.393212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.393242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-29 00:56:54.393248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.393252 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.393256 | orchestrator | 2026-03-29 00:56:54.393260 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-29 00:56:54.393263 | orchestrator | Sunday 29 March 2026 00:51:35 +0000 (0:00:00.928) 0:00:51.900 ********** 2026-03-29 00:56:54.393271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.393279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-29 00:56:54.393283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-29 00:56:54.393287 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.393291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-29 00:56:54.393295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-29 00:56:54.393324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.393332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.393342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.393346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.393350 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.393354 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.393357 | orchestrator |
2026-03-29 00:56:54.393361 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-03-29 00:56:54.393366 | orchestrator | Sunday 29 March 2026 00:51:36 +0000 (0:00:01.311) 0:00:53.212 **********
2026-03-29 00:56:54.393369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.393373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.393403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.393408 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.393415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.393423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.393427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.393431 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.394222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.394241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.394247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.394251 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.394255 | orchestrator |
2026-03-29 00:56:54.394260 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-03-29 00:56:54.394275 | orchestrator | Sunday 29 March 2026 00:51:38 +0000 (0:00:01.637) 0:00:54.849 **********
2026-03-29 00:56:54.394279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.394310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.394315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.394319 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.394323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.394328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.394332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.394336 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.394347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.394376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.394386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.394392 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.394398 | orchestrator |
2026-03-29 00:56:54.394403 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-03-29 00:56:54.394409 | orchestrator | Sunday 29 March 2026 00:51:39 +0000 (0:00:01.500) 0:00:56.350 **********
2026-03-29 00:56:54.394415 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-29 00:56:54.394423 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-29 00:56:54.394429 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-29 00:56:54.394435 | orchestrator |
2026-03-29 00:56:54.394443 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-03-29 00:56:54.394449 | orchestrator | Sunday 29 March 2026 00:51:41 +0000 (0:00:01.460) 0:00:57.811 **********
2026-03-29 00:56:54.394456 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-29 00:56:54.394464 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-29 00:56:54.394472 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-29 00:56:54.394478 | orchestrator |
2026-03-29 00:56:54.394484 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-03-29 00:56:54.394490 | orchestrator | Sunday 29 March 2026 00:51:42 +0000 (0:00:01.270) 0:00:59.081 **********
2026-03-29 00:56:54.394497 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-29 00:56:54.394503 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-29 00:56:54.394509 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-29 00:56:54.394516 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-29 00:56:54.394522 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.394529 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-29 00:56:54.394535 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.394548 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-29 00:56:54.394553 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.394556 | orchestrator |
2026-03-29 00:56:54.394560 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-03-29 00:56:54.394564 | orchestrator | Sunday 29 March 2026 00:51:43 +0000 (0:00:00.700) 0:00:59.782 **********
2026-03-29 00:56:54.394629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.394636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.394648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-29 00:56:54.394655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.394661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.394667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-29 00:56:54.394678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.394701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.394711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-29 00:56:54.394718 | orchestrator |
2026-03-29 00:56:54.394724 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-03-29 00:56:54.394731 | orchestrator | Sunday 29 March 2026 00:51:45 +0000 (0:00:02.645) 0:01:02.428 **********
2026-03-29 00:56:54.394736 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:54.394740 | orchestrator |
2026-03-29 00:56:54.394744 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-03-29 00:56:54.394748 | orchestrator | Sunday 29 March 2026 00:51:46 +0000 (0:00:00.602) 0:01:03.030 **********
2026-03-29 00:56:54.394753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 00:56:54.394759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 00:56:54.394767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.394771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.394780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 00:56:54.394788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 00:56:54.394792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.394796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 00:56:54.394804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.394808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 00:56:54.394816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.394823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.394827 | orchestrator |
2026-03-29 00:56:54.394831 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-03-29 00:56:54.394835 | orchestrator | Sunday 29 March 2026 00:51:50 +0000 (0:00:03.972) 0:01:07.003 **********
2026-03-29 00:56:54.394840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 00:56:54.394844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-29 00:56:54.394852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 00:56:54.394858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-29 00:56:54.394862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.394869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.394873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name':
'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.394877 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.394881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.394892 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.394896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-29 00:56:54.394916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.394924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.394930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.394934 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.394938 | orchestrator | 2026-03-29 00:56:54.394942 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-29 00:56:54.394946 | orchestrator | Sunday 29 March 2026 00:51:51 +0000 (0:00:00.906) 0:01:07.909 ********** 2026-03-29 00:56:54.394951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:56:54.394957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:56:54.394961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:56:54.394971 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.394975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:56:54.394979 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.394983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:56:54.394987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-29 00:56:54.394991 | orchestrator | skipping: [testbed-node-2] 
2026-03-29 00:56:54.394995 | orchestrator | 2026-03-29 00:56:54.394998 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-29 00:56:54.395002 | orchestrator | Sunday 29 March 2026 00:51:52 +0000 (0:00:01.050) 0:01:08.960 ********** 2026-03-29 00:56:54.395006 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.395010 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.395014 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.395018 | orchestrator | 2026-03-29 00:56:54.395022 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-29 00:56:54.395026 | orchestrator | Sunday 29 March 2026 00:51:53 +0000 (0:00:01.233) 0:01:10.193 ********** 2026-03-29 00:56:54.395030 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.395033 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.395037 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.395041 | orchestrator | 2026-03-29 00:56:54.395045 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-29 00:56:54.395049 | orchestrator | Sunday 29 March 2026 00:51:55 +0000 (0:00:02.090) 0:01:12.284 ********** 2026-03-29 00:56:54.395053 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.395056 | orchestrator | 2026-03-29 00:56:54.395060 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-29 00:56:54.395064 | orchestrator | Sunday 29 March 2026 00:51:56 +0000 (0:00:00.892) 0:01:13.176 ********** 2026-03-29 00:56:54.395072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.395080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.395096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.395115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395126 | orchestrator | 2026-03-29 00:56:54.395130 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-29 00:56:54.395134 | orchestrator | Sunday 29 March 2026 00:52:00 +0000 (0:00:04.326) 0:01:17.503 ********** 2026-03-29 00:56:54.395138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.395142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395155 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.395162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.395169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395178 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.395182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.395188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395200 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.395204 | orchestrator | 2026-03-29 00:56:54.395207 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-29 00:56:54.395211 | orchestrator | Sunday 29 March 2026 00:52:01 +0000 (0:00:00.742) 0:01:18.246 ********** 2026-03-29 00:56:54.395219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395227 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.395231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395239 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.395242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395250 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.395254 | orchestrator | 2026-03-29 00:56:54.395258 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-29 00:56:54.395262 | orchestrator | Sunday 29 March 2026 00:52:02 +0000 (0:00:01.109) 0:01:19.355 ********** 2026-03-29 00:56:54.395265 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.395269 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.395273 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.395277 | orchestrator | 2026-03-29 00:56:54.395280 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-29 00:56:54.395284 | orchestrator | Sunday 29 March 2026 00:52:03 +0000 (0:00:01.319) 0:01:20.675 ********** 2026-03-29 00:56:54.395288 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.395292 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.395295 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.395299 | orchestrator | 2026-03-29 00:56:54.395303 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-29 00:56:54.395307 | orchestrator | Sunday 29 March 2026 00:52:06 +0000 (0:00:02.168) 0:01:22.843 ********** 2026-03-29 00:56:54.395311 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.395314 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.395318 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.395322 | orchestrator | 2026-03-29 00:56:54.395326 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-29 00:56:54.395329 | orchestrator | Sunday 29 March 2026 00:52:06 
+0000 (0:00:00.318) 0:01:23.161 ********** 2026-03-29 00:56:54.395333 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.395340 | orchestrator | 2026-03-29 00:56:54.395344 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-29 00:56:54.395348 | orchestrator | Sunday 29 March 2026 00:52:07 +0000 (0:00:00.949) 0:01:24.111 ********** 2026-03-29 00:56:54.395355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 00:56:54.395364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 00:56:54.395368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-29 00:56:54.395372 | orchestrator | 2026-03-29 00:56:54.395376 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-29 00:56:54.395380 | orchestrator | Sunday 29 March 2026 00:52:10 +0000 (0:00:03.209) 0:01:27.320 ********** 2026-03-29 00:56:54.395384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 00:56:54.395388 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.395397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 00:56:54.395401 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.395408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-29 00:56:54.395412 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.395416 | orchestrator | 2026-03-29 00:56:54.395420 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-29 00:56:54.395424 | orchestrator | Sunday 29 March 2026 00:52:12 +0000 (0:00:02.178) 0:01:29.498 ********** 2026-03-29 00:56:54.395432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:56:54.395438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:56:54.395444 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.395448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:56:54.395452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:56:54.395456 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.395460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:56:54.395469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-29 00:56:54.395473 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.395477 | orchestrator | 2026-03-29 00:56:54.395480 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-29 00:56:54.395484 | orchestrator | Sunday 29 March 2026 00:52:14 +0000 (0:00:02.040) 0:01:31.538 ********** 2026-03-29 00:56:54.395488 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.395492 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.395496 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.395499 | orchestrator | 2026-03-29 00:56:54.395504 | orchestrator | 
TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-29 00:56:54.395508 | orchestrator | Sunday 29 March 2026 00:52:15 +0000 (0:00:00.831) 0:01:32.370 ********** 2026-03-29 00:56:54.395511 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.395515 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.395519 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.395523 | orchestrator | 2026-03-29 00:56:54.395527 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-29 00:56:54.395533 | orchestrator | Sunday 29 March 2026 00:52:17 +0000 (0:00:01.556) 0:01:33.927 ********** 2026-03-29 00:56:54.395538 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.395541 | orchestrator | 2026-03-29 00:56:54.395545 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-29 00:56:54.395549 | orchestrator | Sunday 29 March 2026 00:52:17 +0000 (0:00:00.712) 0:01:34.639 ********** 2026-03-29 00:56:54.395556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.395563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.395622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.395657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395686 | orchestrator | 2026-03-29 00:56:54.395691 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-29 00:56:54.395695 | orchestrator | Sunday 29 March 2026 00:52:22 +0000 (0:00:04.406) 0:01:39.046 ********** 2026-03-29 00:56:54.395699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.395706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395721 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.395727 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.395732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395748 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.395752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.395759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.395779 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.395783 | orchestrator | 2026-03-29 00:56:54.395787 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-29 00:56:54.395791 | orchestrator | Sunday 29 March 2026 00:52:24 +0000 (0:00:01.963) 0:01:41.010 ********** 2026-03-29 00:56:54.395795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395803 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.395807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395815 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.395819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395823 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-29 00:56:54.395827 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.395831 | orchestrator | 2026-03-29 00:56:54.395835 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-29 00:56:54.395839 | orchestrator | Sunday 29 March 2026 00:52:25 +0000 (0:00:01.426) 0:01:42.437 ********** 2026-03-29 00:56:54.395842 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.395846 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.395850 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.395854 | orchestrator | 2026-03-29 00:56:54.395857 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-29 00:56:54.395861 | orchestrator | Sunday 29 March 2026 00:52:27 +0000 (0:00:01.361) 0:01:43.798 ********** 2026-03-29 00:56:54.395865 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.395869 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.395873 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.395876 | orchestrator | 2026-03-29 00:56:54.396658 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-29 00:56:54.396685 | orchestrator | Sunday 29 March 2026 00:52:29 +0000 (0:00:01.916) 0:01:45.715 ********** 2026-03-29 00:56:54.396689 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.396693 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.396697 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.396709 | orchestrator | 2026-03-29 00:56:54.396713 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-29 00:56:54.396717 | 
orchestrator | Sunday 29 March 2026 00:52:29 +0000 (0:00:00.446) 0:01:46.161 ********** 2026-03-29 00:56:54.396721 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.396725 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.396729 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.396732 | orchestrator | 2026-03-29 00:56:54.396736 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-29 00:56:54.396740 | orchestrator | Sunday 29 March 2026 00:52:29 +0000 (0:00:00.312) 0:01:46.473 ********** 2026-03-29 00:56:54.396744 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.396747 | orchestrator | 2026-03-29 00:56:54.396757 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-29 00:56:54.396760 | orchestrator | Sunday 29 March 2026 00:52:30 +0000 (0:00:00.835) 0:01:47.309 ********** 2026-03-29 00:56:54.396765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 00:56:54.396771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-29 00:56:54.396776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-29 00:56:54.396781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-29 00:56:54.396801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-29 00:56:54.396830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-29 00:56:54.396855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396899 | orchestrator |
2026-03-29 00:56:54.396903 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-29 00:56:54.396907 | orchestrator | Sunday 29 March 2026 00:52:36 +0000 (0:00:05.926) 0:01:53.235 **********
2026-03-29 00:56:54.396911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name':
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-29 00:56:54.396921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-29 00:56:54.396928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-29 00:56:54.396932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-29 00:56:54.396936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396976 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.396980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.396994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-29 00:56:54.396998 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.397005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-29 00:56:54.397009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.397013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.397017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.397024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.397031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.397035 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.397039 | orchestrator |
2026-03-29 00:56:54.397042 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-03-29 00:56:54.397046 | orchestrator | Sunday 29 March 2026 00:52:37 +0000 (0:00:00.985) 0:01:54.221 **********
2026-03-29 00:56:54.397051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-29 00:56:54.397057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-29 00:56:54.397061 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.397067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-29 00:56:54.397071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-29 00:56:54.397075 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.397079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-03-29 00:56:54.397082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-03-29 00:56:54.397086 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.397090 | orchestrator |
2026-03-29 00:56:54.397094 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-03-29 00:56:54.397098 | orchestrator | Sunday 29 March 2026 00:52:38 +0000 (0:00:01.296) 0:01:55.518 **********
2026-03-29 00:56:54.397101 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:56:54.397105 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:56:54.397109 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:56:54.397113 | orchestrator |
2026-03-29 00:56:54.397116 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-03-29 00:56:54.397120 | orchestrator | Sunday 29 March 2026 00:52:40 +0000 (0:00:02.051) 0:01:57.570 **********
2026-03-29 00:56:54.397127 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:56:54.397130 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:56:54.397134 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:56:54.397138 | orchestrator |
2026-03-29 00:56:54.397141 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-03-29 00:56:54.397145 | orchestrator | Sunday 29 March 2026 00:52:42 +0000 (0:00:01.760) 0:01:59.330 **********
2026-03-29 00:56:54.397149 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.397153 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.397156 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.397160 | orchestrator |
2026-03-29 00:56:54.397164 | orchestrator | TASK [include_role : glance] ***************************************************
2026-03-29 00:56:54.397168 | orchestrator | Sunday 29 March 2026 00:52:43 +0000 (0:00:00.751) 0:01:59.770 **********
2026-03-29 00:56:54.397171 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:54.397175 | orchestrator |
2026-03-29 00:56:54.397179 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-03-29 00:56:54.397183 | orchestrator | Sunday 29 March 2026 00:52:43 +0000 (0:00:00.751) 0:02:00.522 **********
2026-03-29 00:56:54.397191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-29 00:56:54.397199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-29 00:56:54.397207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-29 00:56:54.397217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 00:56:54.397240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 
00:56:54.397250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 00:56:54.397260 | orchestrator | 2026-03-29 00:56:54.397264 
| orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-29 00:56:54.397268 | orchestrator | Sunday 29 March 2026 00:52:47 +0000 (0:00:03.983) 0:02:04.505 ********** 2026-03-29 00:56:54.397272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 00:56:54.397282 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 00:56:54.397287 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
00:56:54.397291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 00:56:54.397303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 00:56:54.397311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 00:56:54.397322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-29 00:56:54.397327 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.397331 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.397336 | orchestrator | 2026-03-29 00:56:54.397340 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-29 00:56:54.397344 | orchestrator | Sunday 29 March 2026 00:52:50 +0000 (0:00:03.101) 0:02:07.606 ********** 2026-03-29 00:56:54.397349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 00:56:54.397356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 00:56:54.397368 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.397374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 00:56:54.397381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 00:56:54.397387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 00:56:54.397393 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.397399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-29 00:56:54.397405 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.397411 | orchestrator | 2026-03-29 00:56:54.397417 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-29 00:56:54.397422 | orchestrator | Sunday 29 March 2026 00:52:54 +0000 (0:00:03.213) 0:02:10.820 ********** 2026-03-29 00:56:54.397428 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.397434 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.397439 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.397446 | orchestrator | 2026-03-29 00:56:54.397456 | orchestrator | TASK [proxysql-config : Copying over glance 
ProxySQL rules config] ************* 2026-03-29 00:56:54.397467 | orchestrator | Sunday 29 March 2026 00:52:55 +0000 (0:00:01.294) 0:02:12.115 ********** 2026-03-29 00:56:54.397474 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.397481 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.397487 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.397493 | orchestrator | 2026-03-29 00:56:54.397510 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-29 00:56:54.397520 | orchestrator | Sunday 29 March 2026 00:52:57 +0000 (0:00:02.009) 0:02:14.124 ********** 2026-03-29 00:56:54.397535 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.397543 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.397552 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.397561 | orchestrator | 2026-03-29 00:56:54.397569 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-29 00:56:54.397624 | orchestrator | Sunday 29 March 2026 00:52:57 +0000 (0:00:00.540) 0:02:14.665 ********** 2026-03-29 00:56:54.397631 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.397637 | orchestrator | 2026-03-29 00:56:54.397644 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-29 00:56:54.397651 | orchestrator | Sunday 29 March 2026 00:52:58 +0000 (0:00:00.861) 0:02:15.527 ********** 2026-03-29 00:56:54.397663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 00:56:54.397671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 00:56:54.397678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 00:56:54.397685 | orchestrator | 2026-03-29 00:56:54.397691 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-29 00:56:54.397697 | orchestrator | Sunday 29 March 2026 00:53:01 +0000 (0:00:03.165) 
0:02:18.692 ********** 2026-03-29 00:56:54.397703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 00:56:54.397713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 00:56:54.397725 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.397731 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.397741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 00:56:54.397748 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.397755 | orchestrator | 2026-03-29 00:56:54.397761 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-29 00:56:54.397768 | orchestrator | Sunday 29 March 2026 00:53:02 +0000 (0:00:00.633) 0:02:19.326 ********** 2026-03-29 00:56:54.397774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-29 00:56:54.397780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-29 00:56:54.397786 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.397792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-29 00:56:54.397797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-29 00:56:54.397803 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.397809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  
2026-03-29 00:56:54.397815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-29 00:56:54.397821 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.397827 | orchestrator | 2026-03-29 00:56:54.397834 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-29 00:56:54.397838 | orchestrator | Sunday 29 March 2026 00:53:03 +0000 (0:00:00.674) 0:02:20.000 ********** 2026-03-29 00:56:54.397842 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.397845 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.397849 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.397853 | orchestrator | 2026-03-29 00:56:54.397857 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-29 00:56:54.397861 | orchestrator | Sunday 29 March 2026 00:53:04 +0000 (0:00:01.191) 0:02:21.192 ********** 2026-03-29 00:56:54.397864 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.397868 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.397876 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.397880 | orchestrator | 2026-03-29 00:56:54.397884 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-29 00:56:54.397888 | orchestrator | Sunday 29 March 2026 00:53:06 +0000 (0:00:01.783) 0:02:22.975 ********** 2026-03-29 00:56:54.397891 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.397895 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.397899 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.397903 | orchestrator | 2026-03-29 00:56:54.397906 | orchestrator | TASK [include_role : horizon] ************************************************** 
2026-03-29 00:56:54.397910 | orchestrator | Sunday 29 March 2026 00:53:06 +0000 (0:00:00.417) 0:02:23.393 ********** 2026-03-29 00:56:54.397914 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.397918 | orchestrator | 2026-03-29 00:56:54.397922 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-29 00:56:54.397925 | orchestrator | Sunday 29 March 2026 00:53:07 +0000 (0:00:00.840) 0:02:24.233 ********** 2026-03-29 00:56:54.397938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:56:54.397944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:56:54.397960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 00:56:54.397965 | orchestrator | 2026-03-29 00:56:54.397969 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-29 00:56:54.397972 | orchestrator | Sunday 29 March 2026 00:53:11 +0000 (0:00:03.628) 0:02:27.862 ********** 2026-03-29 00:56:54.397980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:56:54.397990 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.397997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:56:54.398004 | orchestrator | 
skipping: [testbed-node-1] 2026-03-29 00:56:54.398011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-29 00:56:54.398041 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.398045 | orchestrator | 2026-03-29 00:56:54.398049 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-29 00:56:54.398053 | orchestrator | Sunday 29 March 2026 00:53:12 +0000 (0:00:01.360) 0:02:29.223 ********** 2026-03-29 00:56:54.398063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:56:54.398072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:56:54.398081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:56:54.398091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:56:54.398105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-29 00:56:54.398111 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.398117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:56:54.398123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:56:54.398129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:56:54.398135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:56:54.398141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}})  2026-03-29 00:56:54.398146 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.398156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:56:54.398161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:56:54.398167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-29 00:56:54.398177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-29 00:56:54.398183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-29 00:56:54.398190 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.398195 | orchestrator | 2026-03-29 00:56:54.398201 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users 
config] ************ 2026-03-29 00:56:54.398207 | orchestrator | Sunday 29 March 2026 00:53:13 +0000 (0:00:01.080) 0:02:30.304 ********** 2026-03-29 00:56:54.398218 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.398224 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.398230 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.398235 | orchestrator | 2026-03-29 00:56:54.398241 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-29 00:56:54.398247 | orchestrator | Sunday 29 March 2026 00:53:14 +0000 (0:00:01.243) 0:02:31.547 ********** 2026-03-29 00:56:54.398253 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.398257 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.398261 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.398264 | orchestrator | 2026-03-29 00:56:54.398268 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-29 00:56:54.398272 | orchestrator | Sunday 29 March 2026 00:53:17 +0000 (0:00:02.157) 0:02:33.705 ********** 2026-03-29 00:56:54.398276 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.398279 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.398283 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.398287 | orchestrator | 2026-03-29 00:56:54.398291 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-29 00:56:54.398295 | orchestrator | Sunday 29 March 2026 00:53:17 +0000 (0:00:00.295) 0:02:34.001 ********** 2026-03-29 00:56:54.398298 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.398302 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.398306 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.398309 | orchestrator | 2026-03-29 00:56:54.398313 | orchestrator | TASK [include_role : keystone] 
************************************************* 2026-03-29 00:56:54.398317 | orchestrator | Sunday 29 March 2026 00:53:17 +0000 (0:00:00.511) 0:02:34.513 ********** 2026-03-29 00:56:54.398321 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.398324 | orchestrator | 2026-03-29 00:56:54.398328 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-29 00:56:54.398332 | orchestrator | Sunday 29 March 2026 00:53:18 +0000 (0:00:01.001) 0:02:35.514 ********** 2026-03-29 00:56:54.398336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 00:56:54.398345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 00:56:54.398353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 00:56:54.398363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 00:56:54.398367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 00:56:54.398371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 00:56:54.398379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 00:56:54.398386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 00:56:54.398394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 00:56:54.398398 | orchestrator | 2026-03-29 00:56:54.398402 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-29 00:56:54.398406 | 
orchestrator | Sunday 29 March 2026 00:53:22 +0000 (0:00:03.525) 0:02:39.040 ********** 2026-03-29 00:56:54.398410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 00:56:54.398414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 00:56:54.398418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 00:56:54.398422 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.398439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 00:56:54.398446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 00:56:54.398450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 00:56:54.398454 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.398458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 00:56:54.398462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 00:56:54.398471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 00:56:54.398480 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.398484 | orchestrator | 2026-03-29 00:56:54.398488 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-29 00:56:54.398492 | orchestrator | Sunday 29 March 2026 00:53:23 +0000 (0:00:00.934) 0:02:39.974 ********** 2026-03-29 00:56:54.398499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}})  2026-03-29 00:56:54.398503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-29 00:56:54.398507 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.398511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-29 00:56:54.398515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-29 00:56:54.398519 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.398522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-29 00:56:54.398526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-29 00:56:54.398530 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.398534 | orchestrator | 2026-03-29 00:56:54.398538 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-29 00:56:54.398541 | orchestrator | 
Sunday 29 March 2026 00:53:24 +0000 (0:00:01.139) 0:02:41.114 ********** 2026-03-29 00:56:54.398545 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.398549 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.398553 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.398556 | orchestrator | 2026-03-29 00:56:54.398560 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-29 00:56:54.398564 | orchestrator | Sunday 29 March 2026 00:53:25 +0000 (0:00:01.292) 0:02:42.406 ********** 2026-03-29 00:56:54.398568 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.398571 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.398618 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.398623 | orchestrator | 2026-03-29 00:56:54.398627 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-29 00:56:54.398630 | orchestrator | Sunday 29 March 2026 00:53:27 +0000 (0:00:02.093) 0:02:44.499 ********** 2026-03-29 00:56:54.398634 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.398638 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.398642 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.398646 | orchestrator | 2026-03-29 00:56:54.398655 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-29 00:56:54.398659 | orchestrator | Sunday 29 March 2026 00:53:28 +0000 (0:00:00.534) 0:02:45.034 ********** 2026-03-29 00:56:54.398663 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.398667 | orchestrator | 2026-03-29 00:56:54.398671 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-29 00:56:54.398675 | orchestrator | Sunday 29 March 2026 00:53:29 +0000 (0:00:01.016) 0:02:46.051 ********** 2026-03-29 00:56:54.398682 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 00:56:54.398688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.398707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 00:56:54.398712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 00:56:54.398720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.398731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.398741 | orchestrator | 2026-03-29 00:56:54.398749 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-29 00:56:54.398755 | orchestrator | Sunday 29 March 2026 00:53:32 +0000 (0:00:03.434) 0:02:49.486 ********** 2026-03-29 00:56:54.398767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 00:56:54.398773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.398779 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.398785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 00:56:54.398797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.398805 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.398817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 00:56:54.398828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.398835 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.398841 | orchestrator | 2026-03-29 00:56:54.398847 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-29 00:56:54.398853 | orchestrator | Sunday 29 March 2026 00:53:33 +0000 (0:00:00.922) 0:02:50.408 ********** 2026-03-29 00:56:54.398859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-29 00:56:54.398866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-29 00:56:54.398872 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.398876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}})  2026-03-29 00:56:54.398885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-29 00:56:54.398889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-29 00:56:54.398893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-29 00:56:54.398897 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.398901 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.398905 | orchestrator | 2026-03-29 00:56:54.398909 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-29 00:56:54.398912 | orchestrator | Sunday 29 March 2026 00:53:34 +0000 (0:00:00.925) 0:02:51.333 ********** 2026-03-29 00:56:54.398916 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.398920 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.398924 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.398928 | orchestrator | 2026-03-29 00:56:54.398932 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-29 00:56:54.398935 | orchestrator | Sunday 29 March 2026 00:53:35 +0000 (0:00:01.256) 0:02:52.589 ********** 2026-03-29 00:56:54.398939 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.398943 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.398947 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.398951 | orchestrator | 2026-03-29 00:56:54.398955 | orchestrator | TASK [include_role : manila] 
*************************************************** 2026-03-29 00:56:54.398959 | orchestrator | Sunday 29 March 2026 00:53:38 +0000 (0:00:02.220) 0:02:54.810 ********** 2026-03-29 00:56:54.398962 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.398966 | orchestrator | 2026-03-29 00:56:54.398970 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-29 00:56:54.398974 | orchestrator | Sunday 29 March 2026 00:53:39 +0000 (0:00:01.426) 0:02:56.237 ********** 2026-03-29 00:56:54.398981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 00:56:54.398990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.398994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 00:56:54.399014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-29 00:56:54.399037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399051 | orchestrator | 2026-03-29 00:56:54.399056 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-29 00:56:54.399060 | orchestrator | Sunday 29 March 2026 00:53:43 +0000 (0:00:03.870) 0:03:00.107 ********** 2026-03-29 00:56:54.399066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 00:56:54.399071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399087 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.399091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 00:56:54.399097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399116 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.399121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-29 00:56:54.399127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.399154 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.399160 | orchestrator | 2026-03-29 00:56:54.399166 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-29 00:56:54.399173 | orchestrator | Sunday 29 March 2026 00:53:44 +0000 (0:00:00.697) 0:03:00.805 ********** 2026-03-29 00:56:54.399179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-29 00:56:54.399197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-29 00:56:54.399203 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.399209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-29 00:56:54.399215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-29 00:56:54.399220 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.399225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786'}})  2026-03-29 00:56:54.399232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-29 00:56:54.399237 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.399242 | orchestrator | 2026-03-29 00:56:54.399248 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-29 00:56:54.399253 | orchestrator | Sunday 29 March 2026 00:53:45 +0000 (0:00:01.216) 0:03:02.021 ********** 2026-03-29 00:56:54.399258 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.399263 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.399269 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.399275 | orchestrator | 2026-03-29 00:56:54.399281 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-29 00:56:54.399287 | orchestrator | Sunday 29 March 2026 00:53:46 +0000 (0:00:01.347) 0:03:03.369 ********** 2026-03-29 00:56:54.399292 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.399298 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.399304 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.399309 | orchestrator | 2026-03-29 00:56:54.399315 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-29 00:56:54.399320 | orchestrator | Sunday 29 March 2026 00:53:48 +0000 (0:00:01.991) 0:03:05.360 ********** 2026-03-29 00:56:54.399325 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.399332 | orchestrator | 2026-03-29 00:56:54.399338 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-29 00:56:54.399345 | orchestrator | Sunday 29 March 2026 
00:53:49 +0000 (0:00:01.296) 0:03:06.657 ********** 2026-03-29 00:56:54.399351 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 00:56:54.399357 | orchestrator | 2026-03-29 00:56:54.399363 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-29 00:56:54.399369 | orchestrator | Sunday 29 March 2026 00:53:53 +0000 (0:00:03.116) 0:03:09.774 ********** 2026-03-29 00:56:54.399381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:56:54.399399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:56:54.399406 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.399413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:56:54.399419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:56:54.399431 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.399445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:56:54.399453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:56:54.399459 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.399465 | orchestrator | 2026-03-29 00:56:54.399472 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-29 00:56:54.399479 | orchestrator | Sunday 29 March 2026 00:53:55 +0000 (0:00:02.269) 0:03:12.044 ********** 2026-03-29 00:56:54.399487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:56:54.399499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:56:54.399503 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.399507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 
'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:56:54.399512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:56:54.399519 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.399529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:56:54.399534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-29 00:56:54.399538 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.399542 | orchestrator | 2026-03-29 00:56:54.399546 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-29 00:56:54.399550 | orchestrator | Sunday 29 March 2026 00:53:57 +0000 (0:00:02.539) 0:03:14.583 ********** 2026-03-29 00:56:54.399555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 00:56:54.399559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 00:56:54.399567 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 00:56:54.399571 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.399607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 00:56:54.399613 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.399620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-29 
00:56:54.399624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-03-29 00:56:54.399628 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.399632 | orchestrator |
2026-03-29 00:56:54.399637 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-03-29 00:56:54.399641 | orchestrator | Sunday 29 March 2026 00:54:00 +0000 (0:00:02.797) 0:03:17.381 **********
2026-03-29 00:56:54.399645 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:56:54.399649 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:56:54.399653 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:56:54.399657 | orchestrator |
2026-03-29 00:56:54.399661 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-03-29 00:56:54.399665 | orchestrator | Sunday 29 March 2026 00:54:02 +0000 (0:00:01.776) 0:03:19.157 **********
2026-03-29 00:56:54.399668 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.399673 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.399677 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.399681 | orchestrator |
2026-03-29 00:56:54.399685 | orchestrator | TASK [include_role : masakari] *************************************************
2026-03-29 00:56:54.399689 | orchestrator | Sunday 29 March 2026 00:54:03 +0000 (0:00:00.327) 0:03:20.580 **********
2026-03-29 00:56:54.399693 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.399697 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.399701 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.399710 | orchestrator |
2026-03-29 00:56:54.399714 | orchestrator | TASK [include_role : memcached] ************************************************
2026-03-29 00:56:54.399718 | orchestrator | Sunday 29 March 2026 00:54:04 +0000 (0:00:00.327) 0:03:20.908 **********
2026-03-29 00:56:54.399722 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:54.399726 | orchestrator |
2026-03-29 00:56:54.399730 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-03-29 00:56:54.399734 | orchestrator | Sunday 29 March 2026 00:54:05 +0000 (0:00:01.419) 0:03:22.328 **********
2026-03-29 00:56:54.399738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-29 00:56:54.399747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-29 00:56:54.399755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-29 00:56:54.399760 | orchestrator |
2026-03-29 00:56:54.399765 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-03-29 00:56:54.399769 | orchestrator | Sunday 29 March 2026 00:54:07 +0000 (0:00:01.520) 0:03:23.849 **********
2026-03-29 00:56:54.399773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-29 00:56:54.399782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-29 00:56:54.399786 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.399790 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.399794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-29 00:56:54.399798 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.399802 | orchestrator |
2026-03-29 00:56:54.399806 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-03-29 00:56:54.399810 | orchestrator | Sunday 29 March 2026 00:54:07 +0000 (0:00:00.430) 0:03:24.279 **********
2026-03-29 00:56:54.399815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-29 00:56:54.399819 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.399953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-29 00:56:54.399964 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.399969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-29 00:56:54.399973 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.399977 | orchestrator |
2026-03-29 00:56:54.399982 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-03-29 00:56:54.399985 | orchestrator | Sunday 29 March 2026 00:54:08 +0000 (0:00:00.875) 0:03:25.154 **********
2026-03-29 00:56:54.399990 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.399994 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.400005 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.400009 | orchestrator |
2026-03-29 00:56:54.400013 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-03-29 00:56:54.400017 | orchestrator | Sunday 29 March 2026 00:54:08 +0000 (0:00:00.498) 0:03:25.653 **********
2026-03-29 00:56:54.400021 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.400025 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.400029 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.400038 | orchestrator |
2026-03-29 00:56:54.400042 | orchestrator | TASK [include_role : mistral] **************************************************
2026-03-29 00:56:54.400046 | orchestrator | Sunday 29 March 2026 00:54:10 +0000 (0:00:01.252) 0:03:26.905 **********
2026-03-29 00:56:54.400050 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.400054 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.400058 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.400062 | orchestrator |
2026-03-29 00:56:54.400066 | orchestrator | TASK [include_role : neutron] **************************************************
2026-03-29 00:56:54.400069 | orchestrator | Sunday 29 March 2026 00:54:10 +0000 (0:00:00.332) 0:03:27.238 **********
2026-03-29 00:56:54.400073 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:54.400077 | orchestrator |
2026-03-29 00:56:54.400081 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-03-29 00:56:54.400085 | orchestrator | Sunday 29 March 2026 00:54:11 +0000 (0:00:01.440) 0:03:28.679 **********
2026-03-29 00:56:54.400090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-29 00:56:54.400095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-29 00:56:54.400155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-29 00:56:54.400166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-29 00:56:54.400171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-29 00:56:54.400218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-29 00:56:54.400228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-29 00:56:54.400232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-29 00:56:54.400236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-29 00:56:54.400271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-29 00:56:54.400280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-29 00:56:54.400342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-29 00:56:54.400351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-29 00:56:54.400355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-29 00:56:54.400395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-29 00:56:54.400408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-29 00:56:54.400417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-29 00:56:54.400421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-29 00:56:54.400477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-29 00:56:54.400485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-29 00:56:54.400522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-29 00:56:54.400542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-29 00:56:54.400547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-29 00:56:54.400551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:56:54.400555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.400560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2026-03-29 00:56:54.400664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:56:54.400676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.400685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 00:56:54.400690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:56:54.400694 | orchestrator | 2026-03-29 00:56:54.400698 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-29 00:56:54.400702 | orchestrator | Sunday 29 March 2026 00:54:16 +0000 (0:00:04.186) 0:03:32.865 ********** 2026-03-29 00:56:54.400707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 00:56:54.400746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.400757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.400764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.400770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 00:56:54.400776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.400782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:56:54.400832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:56:54.400845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 00:56:54.400851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.400857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.400863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:56:54.400873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.400914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.400924 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.400931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 00:56:54.400938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 00:56:54.400945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:56:54.400959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 00:56:54.401079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:56:54.401110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 00:56:54.401130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2026-03-29 00:56:54.401186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:56:54.401202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401216 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.401220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:56:54.401253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-29 00:56:54.401262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 00:56:54.401275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:56:54.401283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:56:54.401287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:56:54.401320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 00:56:54.401338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:56:54.401347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:56:54.401351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401355 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.401372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-29 00:56:54.401380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-29 00:56:54.401387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-29 00:56:54.401409 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-29 00:56:54.401419 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.401425 | orchestrator | 2026-03-29 00:56:54.401431 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-29 00:56:54.401438 | orchestrator | Sunday 29 March 2026 00:54:17 +0000 (0:00:01.309) 0:03:34.175 ********** 2026-03-29 00:56:54.401446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-29 00:56:54.401452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-29 00:56:54.401459 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.401485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-29 00:56:54.401492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-29 
00:56:54.401499 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.401505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-29 00:56:54.401511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-29 00:56:54.401517 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.401522 | orchestrator | 2026-03-29 00:56:54.401528 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-29 00:56:54.401538 | orchestrator | Sunday 29 March 2026 00:54:19 +0000 (0:00:02.121) 0:03:36.297 ********** 2026-03-29 00:56:54.401545 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.401551 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.401557 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.401564 | orchestrator | 2026-03-29 00:56:54.401569 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-29 00:56:54.401572 | orchestrator | Sunday 29 March 2026 00:54:20 +0000 (0:00:01.332) 0:03:37.630 ********** 2026-03-29 00:56:54.401600 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.401604 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.401613 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.401617 | orchestrator | 2026-03-29 00:56:54.401621 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-29 00:56:54.401625 | orchestrator | Sunday 29 March 2026 00:54:22 +0000 (0:00:02.057) 0:03:39.687 ********** 2026-03-29 00:56:54.401629 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 
2026-03-29 00:56:54.401634 | orchestrator | 2026-03-29 00:56:54.401638 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-29 00:56:54.401642 | orchestrator | Sunday 29 March 2026 00:54:24 +0000 (0:00:01.259) 0:03:40.947 ********** 2026-03-29 00:56:54.401646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.401651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.401674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.401679 | orchestrator | 2026-03-29 00:56:54.401683 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-29 00:56:54.401688 | orchestrator | Sunday 29 March 2026 00:54:27 +0000 (0:00:03.654) 0:03:44.601 ********** 2026-03-29 00:56:54.401695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.401706 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.401710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.401714 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.401718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.401722 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.401727 | orchestrator | 2026-03-29 00:56:54.401730 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-29 00:56:54.401734 | orchestrator | Sunday 29 March 2026 00:54:28 +0000 (0:00:00.512) 0:03:45.113 ********** 2026-03-29 00:56:54.401738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 00:56:54.401743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 00:56:54.401748 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.401765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 00:56:54.401771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 00:56:54.401775 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.401781 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 00:56:54.401788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-29 00:56:54.401793 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.401797 | orchestrator | 2026-03-29 00:56:54.401801 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-29 00:56:54.401805 | orchestrator | Sunday 29 March 2026 00:54:29 +0000 (0:00:00.659) 0:03:45.773 ********** 2026-03-29 00:56:54.401808 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.401812 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.401816 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.401820 | orchestrator | 2026-03-29 00:56:54.401824 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-29 00:56:54.401828 | orchestrator | Sunday 29 March 2026 00:54:30 +0000 (0:00:01.614) 0:03:47.388 ********** 2026-03-29 00:56:54.401831 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.401835 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.401839 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.401843 | orchestrator | 2026-03-29 00:56:54.401847 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-29 00:56:54.401851 | orchestrator | Sunday 29 March 2026 00:54:32 +0000 (0:00:01.618) 0:03:49.007 ********** 2026-03-29 00:56:54.401855 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.401858 | orchestrator | 2026-03-29 
00:56:54.401862 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-29 00:56:54.401866 | orchestrator | Sunday 29 March 2026 00:54:33 +0000 (0:00:01.373) 0:03:50.381 ********** 2026-03-29 00:56:54.401871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.401876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.401910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.401938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401949 | 
orchestrator | 2026-03-29 00:56:54.401954 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-29 00:56:54.401958 | orchestrator | Sunday 29 March 2026 00:54:37 +0000 (0:00:03.780) 0:03:54.162 ********** 2026-03-29 00:56:54.401963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.401968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.401979 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.402001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.402007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.402042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.402048 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.402053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.402061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.402081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.402086 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.402091 | orchestrator | 2026-03-29 00:56:54.402095 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-29 00:56:54.402100 | orchestrator | Sunday 29 March 2026 00:54:38 +0000 (0:00:00.970) 0:03:55.132 ********** 2026-03-29 00:56:54.402107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402126 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.402131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402148 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.402152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-29 00:56:54.402173 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.402178 | orchestrator | 2026-03-29 00:56:54.402182 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-29 00:56:54.402186 | orchestrator | Sunday 29 
March 2026 00:54:39 +0000 (0:00:00.852) 0:03:55.985 ********** 2026-03-29 00:56:54.402191 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.402195 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.402199 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.402203 | orchestrator | 2026-03-29 00:56:54.402208 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-29 00:56:54.402212 | orchestrator | Sunday 29 March 2026 00:54:40 +0000 (0:00:01.309) 0:03:57.294 ********** 2026-03-29 00:56:54.402216 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.402221 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.402225 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.402229 | orchestrator | 2026-03-29 00:56:54.402247 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-29 00:56:54.402252 | orchestrator | Sunday 29 March 2026 00:54:42 +0000 (0:00:01.885) 0:03:59.180 ********** 2026-03-29 00:56:54.402256 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.402261 | orchestrator | 2026-03-29 00:56:54.402265 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-29 00:56:54.402270 | orchestrator | Sunday 29 March 2026 00:54:43 +0000 (0:00:01.406) 0:04:00.586 ********** 2026-03-29 00:56:54.402274 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-29 00:56:54.402279 | orchestrator | 2026-03-29 00:56:54.402283 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-29 00:56:54.402288 | orchestrator | Sunday 29 March 2026 00:54:44 +0000 (0:00:00.779) 0:04:01.366 ********** 2026-03-29 00:56:54.402295 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 00:56:54.402300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 00:56:54.402305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-29 00:56:54.402313 | orchestrator | 2026-03-29 00:56:54.402318 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-29 00:56:54.402322 | orchestrator | Sunday 29 March 2026 00:54:48 +0000 (0:00:04.006) 0:04:05.372 ********** 2026-03-29 00:56:54.402327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:56:54.402332 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.402336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:56:54.402340 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.402343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:56:54.402348 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.402351 | orchestrator | 2026-03-29 00:56:54.402368 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-29 00:56:54.402372 | orchestrator | Sunday 29 March 2026 00:54:49 +0000 (0:00:01.012) 0:04:06.385 
********** 2026-03-29 00:56:54.402376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:56:54.402380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:56:54.402385 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.402392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:56:54.402396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:56:54.402400 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.402404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:56:54.402414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-29 00:56:54.402418 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.402422 | orchestrator | 2026-03-29 00:56:54.402426 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 00:56:54.402430 | orchestrator | Sunday 29 March 2026 00:54:51 +0000 (0:00:01.528) 0:04:07.914 ********** 2026-03-29 00:56:54.402434 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.402438 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.402441 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.402445 | orchestrator | 2026-03-29 00:56:54.402449 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 00:56:54.402453 | orchestrator | Sunday 29 March 2026 00:54:53 +0000 (0:00:02.464) 0:04:10.379 ********** 2026-03-29 00:56:54.402457 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.402461 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.402465 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.402468 | orchestrator | 2026-03-29 00:56:54.402472 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-29 00:56:54.402476 | orchestrator | Sunday 29 March 2026 00:54:56 +0000 (0:00:02.828) 0:04:13.207 ********** 2026-03-29 00:56:54.402481 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-29 00:56:54.402484 | orchestrator | 2026-03-29 00:56:54.402488 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-29 00:56:54.402492 | orchestrator | Sunday 29 March 2026 00:54:57 +0000 (0:00:01.240) 0:04:14.448 ********** 2026-03-29 00:56:54.402496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:56:54.402500 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.402504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:56:54.402508 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.402526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:56:54.402531 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.402535 | orchestrator | 2026-03-29 00:56:54.402539 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-29 00:56:54.402543 | orchestrator | Sunday 29 March 2026 00:54:58 +0000 (0:00:01.163) 0:04:15.612 ********** 2026-03-29 00:56:54.402554 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:56:54.402559 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.402562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:56:54.402566 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.402570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-29 00:56:54.402618 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.402624 | orchestrator | 2026-03-29 00:56:54.402628 | orchestrator | TASK [haproxy-config : Configuring firewall for 
nova-cell:nova-spicehtml5proxy] *** 2026-03-29 00:56:54.402632 | orchestrator | Sunday 29 March 2026 00:55:00 +0000 (0:00:01.246) 0:04:16.858 ********** 2026-03-29 00:56:54.402636 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.402640 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.402644 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.402648 | orchestrator | 2026-03-29 00:56:54.402652 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 00:56:54.402656 | orchestrator | Sunday 29 March 2026 00:55:01 +0000 (0:00:01.594) 0:04:18.453 ********** 2026-03-29 00:56:54.402660 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.402665 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.402669 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.402673 | orchestrator | 2026-03-29 00:56:54.402677 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 00:56:54.402681 | orchestrator | Sunday 29 March 2026 00:55:03 +0000 (0:00:02.166) 0:04:20.619 ********** 2026-03-29 00:56:54.402685 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.402688 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.402692 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.402696 | orchestrator | 2026-03-29 00:56:54.402700 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-29 00:56:54.402704 | orchestrator | Sunday 29 March 2026 00:55:06 +0000 (0:00:02.716) 0:04:23.335 ********** 2026-03-29 00:56:54.402708 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-29 00:56:54.402712 | orchestrator | 2026-03-29 00:56:54.402717 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-29 
00:56:54.402721 | orchestrator | Sunday 29 March 2026 00:55:07 +0000 (0:00:00.839) 0:04:24.174 ********** 2026-03-29 00:56:54.402743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:56:54.402752 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.402756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:56:54.402760 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.402767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:56:54.402772 | orchestrator | 
skipping: [testbed-node-2] 2026-03-29 00:56:54.402776 | orchestrator | 2026-03-29 00:56:54.402779 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-29 00:56:54.402783 | orchestrator | Sunday 29 March 2026 00:55:08 +0000 (0:00:01.265) 0:04:25.440 ********** 2026-03-29 00:56:54.402787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:56:54.402791 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.402795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:56:54.402799 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.402803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-29 00:56:54.402807 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.402811 | orchestrator | 2026-03-29 00:56:54.402815 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-29 00:56:54.402823 | orchestrator | Sunday 29 March 2026 00:55:10 +0000 (0:00:01.315) 0:04:26.756 ********** 2026-03-29 00:56:54.402827 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.402831 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.402834 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.402838 | orchestrator | 2026-03-29 00:56:54.402842 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-29 00:56:54.402848 | orchestrator | Sunday 29 March 2026 00:55:11 +0000 (0:00:01.367) 0:04:28.123 ********** 2026-03-29 00:56:54.402854 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.402860 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.402866 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.402877 | orchestrator | 2026-03-29 00:56:54.402884 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-29 00:56:54.402889 | orchestrator | Sunday 29 March 2026 00:55:13 +0000 (0:00:02.243) 0:04:30.367 ********** 2026-03-29 00:56:54.402896 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.402901 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.402908 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.402914 | orchestrator | 2026-03-29 00:56:54.402920 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-29 00:56:54.402925 | orchestrator | Sunday 29 March 2026 00:55:16 
+0000 (0:00:02.897) 0:04:33.264 ********** 2026-03-29 00:56:54.402954 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.402962 | orchestrator | 2026-03-29 00:56:54.402968 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-29 00:56:54.402972 | orchestrator | Sunday 29 March 2026 00:55:18 +0000 (0:00:01.492) 0:04:34.757 ********** 2026-03-29 00:56:54.402980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.402985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 
00:56:54.402989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.402995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.403004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.403021 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.403029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.403033 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 00:56:54.403037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 00:56:54.403044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.403048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.403065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.403070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.403077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.403082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.403086 | orchestrator | 2026-03-29 00:56:54.403090 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-29 00:56:54.403098 | orchestrator | Sunday 29 March 2026 00:55:21 +0000 (0:00:03.287) 0:04:38.045 ********** 2026-03-29 00:56:54.403102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.403106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 00:56:54.403123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.403130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.403134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.403138 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.403142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.403149 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 00:56:54.403153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.403168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.403173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.403177 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.403184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.403193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 00:56:54.403197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.403201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 00:56:54.403216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 00:56:54.403221 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.403225 | orchestrator | 2026-03-29 00:56:54.403229 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-29 00:56:54.403232 | orchestrator | Sunday 29 March 2026 00:55:21 +0000 (0:00:00.617) 0:04:38.662 ********** 2026-03-29 00:56:54.403236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:56:54.403241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:56:54.403245 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.403251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:56:54.403255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:56:54.403259 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.403263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:56:54.403270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-29 00:56:54.403275 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.403281 | orchestrator | 2026-03-29 00:56:54.403290 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-29 00:56:54.403300 | orchestrator | Sunday 29 March 2026 00:55:23 +0000 (0:00:01.221) 0:04:39.884 ********** 2026-03-29 00:56:54.403307 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.403314 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.403321 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.403327 | orchestrator | 2026-03-29 00:56:54.403334 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-29 00:56:54.403340 | orchestrator | Sunday 29 March 2026 00:55:24 +0000 (0:00:01.350) 0:04:41.234 ********** 2026-03-29 00:56:54.403347 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.403353 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.403360 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.403366 | orchestrator | 2026-03-29 00:56:54.403371 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-29 00:56:54.403378 | orchestrator | Sunday 29 March 2026 00:55:26 +0000 (0:00:01.983) 0:04:43.218 ********** 2026-03-29 00:56:54.403383 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.403389 | orchestrator | 2026-03-29 00:56:54.403395 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-29 00:56:54.403401 | orchestrator | Sunday 29 March 2026 00:55:27 +0000 (0:00:01.273) 0:04:44.491 ********** 2026-03-29 00:56:54.403408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 
'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:56:54.403438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:56:54.403450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:56:54.403463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:56:54.403471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:56:54.403495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:56:54.403502 | orchestrator | 2026-03-29 00:56:54.403506 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when 
using single external frontend] ***
2026-03-29 00:56:54.403510 | orchestrator | Sunday 29 March 2026 00:55:33 +0000 (0:00:05.206) 0:04:49.698 **********
2026-03-29 00:56:54.403523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-29 00:56:54.403528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-29 00:56:54.403533 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.403537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-29 00:56:54.403554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-29 00:56:54.403559 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.403570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-29 00:56:54.403597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-29 00:56:54.403602 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.403606 | orchestrator |
2026-03-29 00:56:54.403610 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-03-29 00:56:54.403614 | orchestrator | Sunday 29 March 2026 00:55:33 +0000 (0:00:00.593) 0:04:50.291 **********
2026-03-29 00:56:54.403618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-29 00:56:54.403622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-29 00:56:54.403626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-29 00:56:54.403631 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.403635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-29 00:56:54.403639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-29 00:56:54.403643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-29 00:56:54.403647 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.403651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-29 00:56:54.403675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-29 00:56:54.403679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-29 00:56:54.403683 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.403687 | orchestrator |
2026-03-29 00:56:54.403691 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-03-29 00:56:54.403695 | orchestrator | Sunday 29 March 2026 00:55:34 +0000 (0:00:00.893) 0:04:51.185 **********
2026-03-29 00:56:54.403699 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.403702 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.403706 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.403710 | orchestrator |
2026-03-29 00:56:54.403714 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-03-29 00:56:54.403721 | orchestrator | Sunday 29 March 2026 00:55:35 +0000 (0:00:00.757) 0:04:51.943 **********
2026-03-29 00:56:54.403725 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.403728 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:56:54.403732 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:56:54.403736 | orchestrator |
2026-03-29 00:56:54.403740 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-03-29 00:56:54.403744 | orchestrator | Sunday 29 March 2026 00:55:36 +0000 (0:00:01.199) 0:04:53.142 **********
2026-03-29 00:56:54.403747 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:56:54.403751 | orchestrator |
2026-03-29 00:56:54.403756 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-03-29 00:56:54.403759 | orchestrator | Sunday 29 March 2026 00:55:38 +0000 (0:00:01.656) 0:04:54.799 **********
2026-03-29 00:56:54.403764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-29 00:56:54.403768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 00:56:54.403773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 00:56:54.403807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-29 00:56:54.403811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 00:56:54.403815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 00:56:54.403830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-29 00:56:54.403846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 00:56:54.403851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 00:56:54.403866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-29 00:56:54.403876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-29 00:56:54.403883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 00:56:54.403895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-29 00:56:54.403900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-29 00:56:54.403907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 00:56:54.403946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-29 00:56:54.403951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-29 00:56:54.403959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.403967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 00:56:54.403970 | orchestrator |
2026-03-29 00:56:54.403977 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-03-29 00:56:54.403981 | orchestrator | Sunday 29 March 2026 00:55:42 +0000 (0:00:04.565) 0:04:59.365 **********
2026-03-29 00:56:54.403991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-29 00:56:54.403999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 00:56:54.404008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.404015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.404026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 00:56:54.404033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-29 00:56:54.404043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-29 00:56:54.404052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.404060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.404068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-29 00:56:54.404081 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:56:54.404087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-29 00:56:54.404093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-29 00:56:54.404103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.404109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 00:56:54.404118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-29 00:56:54.404125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True},
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 00:56:54.404137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 00:56:54.404144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:56:54.404154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:56:54.404160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 00:56:54.404166 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.404175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 00:56:54.404181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 00:56:54.404192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:56:54.404199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:56:54.404204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-03-29 00:56:54.404214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 00:56:54.404224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-29 00:56:54.404230 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:56:54.404242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 00:56:54.404249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 00:56:54.404255 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.404261 | orchestrator | 2026-03-29 00:56:54.404268 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-29 00:56:54.404274 | orchestrator | Sunday 29 March 2026 00:55:44 +0000 (0:00:01.443) 0:05:00.808 ********** 2026-03-29 00:56:54.404281 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-29 00:56:54.404288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-29 00:56:54.404300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 00:56:54.404305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 00:56:54.404309 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.404317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-29 00:56:54.404321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-29 00:56:54.404325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}})  2026-03-29 00:56:54.404333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 00:56:54.404342 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.404346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-29 00:56:54.404350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-29 00:56:54.404354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 00:56:54.404358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-29 00:56:54.404362 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.404366 | orchestrator | 2026-03-29 00:56:54.404370 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-29 00:56:54.404374 | orchestrator | Sunday 29 March 2026 00:55:45 +0000 (0:00:00.956) 0:05:01.764 ********** 
2026-03-29 00:56:54.404377 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.404381 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.404385 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.404389 | orchestrator | 2026-03-29 00:56:54.404393 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-29 00:56:54.404397 | orchestrator | Sunday 29 March 2026 00:55:45 +0000 (0:00:00.419) 0:05:02.184 ********** 2026-03-29 00:56:54.404400 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.404404 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.404408 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.404412 | orchestrator | 2026-03-29 00:56:54.404416 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-29 00:56:54.404420 | orchestrator | Sunday 29 March 2026 00:55:46 +0000 (0:00:01.444) 0:05:03.629 ********** 2026-03-29 00:56:54.404424 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.404427 | orchestrator | 2026-03-29 00:56:54.404431 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-29 00:56:54.404435 | orchestrator | Sunday 29 March 2026 00:55:48 +0000 (0:00:01.615) 0:05:05.244 ********** 2026-03-29 00:56:54.404441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:56:54.404449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:56:54.404457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-29 00:56:54.404461 | orchestrator | 2026-03-29 00:56:54.404465 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-29 00:56:54.404469 | orchestrator | Sunday 29 March 2026 00:55:50 +0000 (0:00:02.161) 0:05:07.405 ********** 2026-03-29 00:56:54.404473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 00:56:54.404477 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.404484 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 00:56:54.404495 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.404506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-29 00:56:54.404517 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.404523 | orchestrator | 2026-03-29 00:56:54.404528 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-29 00:56:54.404534 | orchestrator | Sunday 29 March 2026 00:55:51 +0000 (0:00:00.399) 0:05:07.805 ********** 2026-03-29 00:56:54.404540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-29 00:56:54.404546 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.404552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-29 00:56:54.404558 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.404564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-29 00:56:54.404570 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.404622 | orchestrator | 2026-03-29 00:56:54.404627 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-29 00:56:54.404631 | orchestrator | Sunday 29 March 2026 00:55:52 +0000 (0:00:01.032) 0:05:08.838 ********** 2026-03-29 00:56:54.404635 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.404638 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.404642 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.404646 | orchestrator | 2026-03-29 00:56:54.404650 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-29 00:56:54.404654 | orchestrator | Sunday 29 March 2026 00:55:52 +0000 (0:00:00.471) 0:05:09.309 
********** 2026-03-29 00:56:54.404658 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.404662 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.404666 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.404669 | orchestrator | 2026-03-29 00:56:54.404673 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-29 00:56:54.404677 | orchestrator | Sunday 29 March 2026 00:55:54 +0000 (0:00:01.545) 0:05:10.855 ********** 2026-03-29 00:56:54.404681 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:56:54.404685 | orchestrator | 2026-03-29 00:56:54.404688 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-29 00:56:54.404692 | orchestrator | Sunday 29 March 2026 00:55:56 +0000 (0:00:02.055) 0:05:12.910 ********** 2026-03-29 00:56:54.404697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.404713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 
'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.404724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.404733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 
'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.404739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.404754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-29 00:56:54.404762 | orchestrator | 2026-03-29 00:56:54.404769 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-29 00:56:54.404776 | orchestrator | Sunday 29 March 2026 00:56:02 +0000 (0:00:05.916) 0:05:18.827 ********** 2026-03-29 00:56:54.404783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 
'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.404788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.404792 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.404796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.404808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.404812 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.404819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.404824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-29 00:56:54.404828 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.404831 | orchestrator | 2026-03-29 00:56:54.404835 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-29 00:56:54.404839 | orchestrator | Sunday 29 March 2026 00:56:02 +0000 (0:00:00.630) 0:05:19.458 ********** 2026-03-29 00:56:54.404843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404855 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404872 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.404876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404886 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.404890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-29 00:56:54.404909 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.404913 | orchestrator | 2026-03-29 00:56:54.404917 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-29 00:56:54.404921 | orchestrator | Sunday 29 March 2026 00:56:04 +0000 (0:00:01.743) 0:05:21.201 ********** 2026-03-29 00:56:54.404925 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.404928 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.404932 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.404936 | orchestrator | 2026-03-29 00:56:54.404940 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-29 00:56:54.404944 | orchestrator | Sunday 29 March 2026 00:56:05 +0000 (0:00:01.242) 0:05:22.443 ********** 2026-03-29 00:56:54.404948 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.404952 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.404955 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.404959 | orchestrator | 2026-03-29 00:56:54.404963 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-29 00:56:54.404970 | orchestrator | Sunday 29 
March 2026 00:56:07 +0000 (0:00:02.044) 0:05:24.487 ********** 2026-03-29 00:56:54.404974 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.404978 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.404982 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.404986 | orchestrator | 2026-03-29 00:56:54.404990 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-29 00:56:54.404993 | orchestrator | Sunday 29 March 2026 00:56:08 +0000 (0:00:00.334) 0:05:24.823 ********** 2026-03-29 00:56:54.404997 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.405001 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.405005 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.405009 | orchestrator | 2026-03-29 00:56:54.405012 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-29 00:56:54.405016 | orchestrator | Sunday 29 March 2026 00:56:08 +0000 (0:00:00.310) 0:05:25.133 ********** 2026-03-29 00:56:54.405020 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.405024 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.405028 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.405031 | orchestrator | 2026-03-29 00:56:54.405035 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-29 00:56:54.405039 | orchestrator | Sunday 29 March 2026 00:56:09 +0000 (0:00:00.657) 0:05:25.791 ********** 2026-03-29 00:56:54.405043 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.405047 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.405050 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.405054 | orchestrator | 2026-03-29 00:56:54.405058 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-29 00:56:54.405062 | orchestrator | Sunday 29 
March 2026 00:56:09 +0000 (0:00:00.325) 0:05:26.116 ********** 2026-03-29 00:56:54.405066 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.405069 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.405073 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.405077 | orchestrator | 2026-03-29 00:56:54.405081 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-29 00:56:54.405085 | orchestrator | Sunday 29 March 2026 00:56:09 +0000 (0:00:00.317) 0:05:26.434 ********** 2026-03-29 00:56:54.405088 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.405092 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.405096 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.405100 | orchestrator | 2026-03-29 00:56:54.405103 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-29 00:56:54.405107 | orchestrator | Sunday 29 March 2026 00:56:10 +0000 (0:00:00.756) 0:05:27.190 ********** 2026-03-29 00:56:54.405111 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.405115 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.405119 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.405123 | orchestrator | 2026-03-29 00:56:54.405127 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-29 00:56:54.405131 | orchestrator | Sunday 29 March 2026 00:56:11 +0000 (0:00:00.615) 0:05:27.806 ********** 2026-03-29 00:56:54.405134 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.405138 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.405142 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.405146 | orchestrator | 2026-03-29 00:56:54.405150 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-29 00:56:54.405154 | orchestrator | Sunday 29 March 2026 00:56:11 +0000 (0:00:00.332) 
0:05:28.138 ********** 2026-03-29 00:56:54.405157 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.405161 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.405165 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.405169 | orchestrator | 2026-03-29 00:56:54.405175 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-29 00:56:54.405179 | orchestrator | Sunday 29 March 2026 00:56:12 +0000 (0:00:00.807) 0:05:28.946 ********** 2026-03-29 00:56:54.405188 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.405191 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.405195 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.405199 | orchestrator | 2026-03-29 00:56:54.405203 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-29 00:56:54.405206 | orchestrator | Sunday 29 March 2026 00:56:13 +0000 (0:00:00.965) 0:05:29.911 ********** 2026-03-29 00:56:54.405210 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.405214 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.405218 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.405221 | orchestrator | 2026-03-29 00:56:54.405225 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-29 00:56:54.405229 | orchestrator | Sunday 29 March 2026 00:56:13 +0000 (0:00:00.753) 0:05:30.665 ********** 2026-03-29 00:56:54.405233 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.405237 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.405240 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.405244 | orchestrator | 2026-03-29 00:56:54.405248 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-29 00:56:54.405255 | orchestrator | Sunday 29 March 2026 00:56:22 +0000 (0:00:08.064) 0:05:38.730 ********** 2026-03-29 00:56:54.405259 | 
orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.405262 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.405266 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.405270 | orchestrator | 2026-03-29 00:56:54.405274 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-29 00:56:54.405278 | orchestrator | Sunday 29 March 2026 00:56:22 +0000 (0:00:00.662) 0:05:39.392 ********** 2026-03-29 00:56:54.405282 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.405285 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.405289 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.405293 | orchestrator | 2026-03-29 00:56:54.405297 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-29 00:56:54.405300 | orchestrator | Sunday 29 March 2026 00:56:37 +0000 (0:00:14.581) 0:05:53.973 ********** 2026-03-29 00:56:54.405304 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.405308 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.405312 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.405315 | orchestrator | 2026-03-29 00:56:54.405319 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-29 00:56:54.405323 | orchestrator | Sunday 29 March 2026 00:56:38 +0000 (0:00:01.077) 0:05:55.051 ********** 2026-03-29 00:56:54.405327 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:56:54.405331 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:56:54.405334 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:56:54.405338 | orchestrator | 2026-03-29 00:56:54.405342 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-29 00:56:54.405346 | orchestrator | Sunday 29 March 2026 00:56:46 +0000 (0:00:08.256) 0:06:03.307 ********** 2026-03-29 00:56:54.405350 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 00:56:54.405353 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.405357 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.405361 | orchestrator | 2026-03-29 00:56:54.405365 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-29 00:56:54.405369 | orchestrator | Sunday 29 March 2026 00:56:47 +0000 (0:00:00.403) 0:06:03.712 ********** 2026-03-29 00:56:54.405373 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.405376 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.405380 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.405384 | orchestrator | 2026-03-29 00:56:54.405388 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-29 00:56:54.405392 | orchestrator | Sunday 29 March 2026 00:56:47 +0000 (0:00:00.386) 0:06:04.098 ********** 2026-03-29 00:56:54.405396 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.405403 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.405407 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.405411 | orchestrator | 2026-03-29 00:56:54.405415 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-29 00:56:54.405418 | orchestrator | Sunday 29 March 2026 00:56:48 +0000 (0:00:00.700) 0:06:04.799 ********** 2026-03-29 00:56:54.405422 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.405426 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.405430 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.405434 | orchestrator | 2026-03-29 00:56:54.405437 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-29 00:56:54.405441 | orchestrator | Sunday 29 March 2026 00:56:48 +0000 (0:00:00.334) 0:06:05.134 ********** 2026-03-29 00:56:54.405445 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 00:56:54.405449 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.405453 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.405456 | orchestrator | 2026-03-29 00:56:54.405460 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-29 00:56:54.405464 | orchestrator | Sunday 29 March 2026 00:56:48 +0000 (0:00:00.332) 0:06:05.466 ********** 2026-03-29 00:56:54.405468 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:56:54.405471 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:56:54.405475 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:56:54.405480 | orchestrator | 2026-03-29 00:56:54.405486 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-29 00:56:54.405492 | orchestrator | Sunday 29 March 2026 00:56:49 +0000 (0:00:00.353) 0:06:05.819 ********** 2026-03-29 00:56:54.405499 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.405505 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.405510 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.405516 | orchestrator | 2026-03-29 00:56:54.405522 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-29 00:56:54.405529 | orchestrator | Sunday 29 March 2026 00:56:50 +0000 (0:00:01.266) 0:06:07.086 ********** 2026-03-29 00:56:54.405536 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:56:54.405542 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:56:54.405548 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:56:54.405554 | orchestrator | 2026-03-29 00:56:54.405560 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:56:54.405570 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-29 00:56:54.405593 | orchestrator | testbed-node-1 : ok=122  changed=76  
unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-29 00:56:54.405597 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-29 00:56:54.405601 | orchestrator | 2026-03-29 00:56:54.405605 | orchestrator | 2026-03-29 00:56:54.405609 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:56:54.405613 | orchestrator | Sunday 29 March 2026 00:56:51 +0000 (0:00:00.773) 0:06:07.860 ********** 2026-03-29 00:56:54.405616 | orchestrator | =============================================================================== 2026-03-29 00:56:54.405620 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.58s 2026-03-29 00:56:54.405628 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.26s 2026-03-29 00:56:54.405632 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.06s 2026-03-29 00:56:54.405636 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.93s 2026-03-29 00:56:54.405640 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.92s 2026-03-29 00:56:54.405644 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.57s 2026-03-29 00:56:54.405652 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.21s 2026-03-29 00:56:54.405655 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.57s 2026-03-29 00:56:54.405659 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.41s 2026-03-29 00:56:54.405663 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.33s 2026-03-29 00:56:54.405667 | orchestrator | haproxy-config : Copying over neutron haproxy config 
-------------------- 4.19s 2026-03-29 00:56:54.405670 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.01s 2026-03-29 00:56:54.405674 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.98s 2026-03-29 00:56:54.405678 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.97s 2026-03-29 00:56:54.405682 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.87s 2026-03-29 00:56:54.405686 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.85s 2026-03-29 00:56:54.405689 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.78s 2026-03-29 00:56:54.405693 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.65s 2026-03-29 00:56:54.405697 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.63s 2026-03-29 00:56:54.405701 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.53s 2026-03-29 00:56:54.405705 | orchestrator | 2026-03-29 00:56:54 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:56:54.405709 | orchestrator | 2026-03-29 00:56:54 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:56:54.405713 | orchestrator | 2026-03-29 00:56:54 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:54.405717 | orchestrator | 2026-03-29 00:56:54 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:56:57.424928 | orchestrator | 2026-03-29 00:56:57 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:56:57.428299 | orchestrator | 2026-03-29 00:56:57 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:56:57.429310 | orchestrator | 2026-03-29 00:56:57 
| INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:56:57.429350 | orchestrator | 2026-03-29 00:56:57 | INFO  | Wait 1 second(s) until the next
check 2026-03-29 00:57:46.180235 | orchestrator | 2026-03-29 00:57:46 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:57:46.180999 | orchestrator | 2026-03-29 00:57:46 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:57:46.183487 | orchestrator | 2026-03-29 00:57:46 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:57:46.183545 | orchestrator | 2026-03-29 00:57:46 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:49.231828 | orchestrator | 2026-03-29 00:57:49 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:57:49.233726 | orchestrator | 2026-03-29 00:57:49 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:57:49.236072 | orchestrator | 2026-03-29 00:57:49 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:57:49.236244 | orchestrator | 2026-03-29 00:57:49 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:52.284360 | orchestrator | 2026-03-29 00:57:52 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:57:52.286669 | orchestrator | 2026-03-29 00:57:52 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:57:52.287364 | orchestrator | 2026-03-29 00:57:52 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:57:52.287391 | orchestrator | 2026-03-29 00:57:52 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:55.330639 | orchestrator | 2026-03-29 00:57:55 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:57:55.331299 | orchestrator | 2026-03-29 00:57:55 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:57:55.333033 | orchestrator | 2026-03-29 00:57:55 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 
00:57:55.333481 | orchestrator | 2026-03-29 00:57:55 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:57:58.392148 | orchestrator | 2026-03-29 00:57:58 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:57:58.394174 | orchestrator | 2026-03-29 00:57:58 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:57:58.396549 | orchestrator | 2026-03-29 00:57:58 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:57:58.397330 | orchestrator | 2026-03-29 00:57:58 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:01.441969 | orchestrator | 2026-03-29 00:58:01 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:01.444641 | orchestrator | 2026-03-29 00:58:01 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:01.446796 | orchestrator | 2026-03-29 00:58:01 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:01.446957 | orchestrator | 2026-03-29 00:58:01 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:04.491597 | orchestrator | 2026-03-29 00:58:04 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:04.492795 | orchestrator | 2026-03-29 00:58:04 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:04.493621 | orchestrator | 2026-03-29 00:58:04 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:04.493655 | orchestrator | 2026-03-29 00:58:04 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:07.540224 | orchestrator | 2026-03-29 00:58:07 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:07.542567 | orchestrator | 2026-03-29 00:58:07 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:07.543725 | orchestrator | 2026-03-29 00:58:07 | 
INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:07.544218 | orchestrator | 2026-03-29 00:58:07 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:10.582302 | orchestrator | 2026-03-29 00:58:10 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:10.583906 | orchestrator | 2026-03-29 00:58:10 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:10.586243 | orchestrator | 2026-03-29 00:58:10 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:10.586298 | orchestrator | 2026-03-29 00:58:10 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:13.642179 | orchestrator | 2026-03-29 00:58:13 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:13.643769 | orchestrator | 2026-03-29 00:58:13 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:13.644995 | orchestrator | 2026-03-29 00:58:13 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:13.645035 | orchestrator | 2026-03-29 00:58:13 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:16.703590 | orchestrator | 2026-03-29 00:58:16 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:16.708310 | orchestrator | 2026-03-29 00:58:16 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:16.715221 | orchestrator | 2026-03-29 00:58:16 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:16.715267 | orchestrator | 2026-03-29 00:58:16 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:19.755921 | orchestrator | 2026-03-29 00:58:19 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:19.757313 | orchestrator | 2026-03-29 00:58:19 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in 
state STARTED 2026-03-29 00:58:19.758831 | orchestrator | 2026-03-29 00:58:19 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:19.758972 | orchestrator | 2026-03-29 00:58:19 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:22.810300 | orchestrator | 2026-03-29 00:58:22 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:22.813483 | orchestrator | 2026-03-29 00:58:22 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:22.815831 | orchestrator | 2026-03-29 00:58:22 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:22.815888 | orchestrator | 2026-03-29 00:58:22 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:25.871163 | orchestrator | 2026-03-29 00:58:25 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:25.873451 | orchestrator | 2026-03-29 00:58:25 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:25.877020 | orchestrator | 2026-03-29 00:58:25 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:25.877077 | orchestrator | 2026-03-29 00:58:25 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:28.925128 | orchestrator | 2026-03-29 00:58:28 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:28.926900 | orchestrator | 2026-03-29 00:58:28 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:28.928288 | orchestrator | 2026-03-29 00:58:28 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:28.928429 | orchestrator | 2026-03-29 00:58:28 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:31.973840 | orchestrator | 2026-03-29 00:58:31 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:31.976062 | orchestrator 
| 2026-03-29 00:58:31 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:31.978444 | orchestrator | 2026-03-29 00:58:31 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:31.978561 | orchestrator | 2026-03-29 00:58:31 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:35.021318 | orchestrator | 2026-03-29 00:58:35 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:35.022873 | orchestrator | 2026-03-29 00:58:35 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:35.026481 | orchestrator | 2026-03-29 00:58:35 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:35.026609 | orchestrator | 2026-03-29 00:58:35 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:38.078253 | orchestrator | 2026-03-29 00:58:38 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:38.082472 | orchestrator | 2026-03-29 00:58:38 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:38.085164 | orchestrator | 2026-03-29 00:58:38 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:38.086476 | orchestrator | 2026-03-29 00:58:38 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:41.130002 | orchestrator | 2026-03-29 00:58:41 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:41.133155 | orchestrator | 2026-03-29 00:58:41 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:41.134698 | orchestrator | 2026-03-29 00:58:41 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:41.135020 | orchestrator | 2026-03-29 00:58:41 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:44.183460 | orchestrator | 2026-03-29 00:58:44 | INFO  | Task 
ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:44.186468 | orchestrator | 2026-03-29 00:58:44 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:44.188582 | orchestrator | 2026-03-29 00:58:44 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:44.188621 | orchestrator | 2026-03-29 00:58:44 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:47.244315 | orchestrator | 2026-03-29 00:58:47 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:47.247452 | orchestrator | 2026-03-29 00:58:47 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:47.249892 | orchestrator | 2026-03-29 00:58:47 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:47.250205 | orchestrator | 2026-03-29 00:58:47 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:50.297777 | orchestrator | 2026-03-29 00:58:50 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:50.298776 | orchestrator | 2026-03-29 00:58:50 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:50.301316 | orchestrator | 2026-03-29 00:58:50 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:50.301383 | orchestrator | 2026-03-29 00:58:50 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:53.345927 | orchestrator | 2026-03-29 00:58:53 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:53.348591 | orchestrator | 2026-03-29 00:58:53 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:53.351787 | orchestrator | 2026-03-29 00:58:53 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:53.351834 | orchestrator | 2026-03-29 00:58:53 | INFO  | Wait 1 second(s) until the next 
check 2026-03-29 00:58:56.396533 | orchestrator | 2026-03-29 00:58:56 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:56.398151 | orchestrator | 2026-03-29 00:58:56 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:56.401615 | orchestrator | 2026-03-29 00:58:56 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:56.401662 | orchestrator | 2026-03-29 00:58:56 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:58:59.439602 | orchestrator | 2026-03-29 00:58:59 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:58:59.440896 | orchestrator | 2026-03-29 00:58:59 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:58:59.443533 | orchestrator | 2026-03-29 00:58:59 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:58:59.443649 | orchestrator | 2026-03-29 00:58:59 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:02.482149 | orchestrator | 2026-03-29 00:59:02 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:02.483193 | orchestrator | 2026-03-29 00:59:02 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:59:02.484327 | orchestrator | 2026-03-29 00:59:02 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:59:02.484358 | orchestrator | 2026-03-29 00:59:02 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:05.523384 | orchestrator | 2026-03-29 00:59:05 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:05.525718 | orchestrator | 2026-03-29 00:59:05 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:59:05.528038 | orchestrator | 2026-03-29 00:59:05 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 
00:59:05.528093 | orchestrator | 2026-03-29 00:59:05 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:08.570523 | orchestrator | 2026-03-29 00:59:08 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:08.571594 | orchestrator | 2026-03-29 00:59:08 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:59:08.573334 | orchestrator | 2026-03-29 00:59:08 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:59:08.573407 | orchestrator | 2026-03-29 00:59:08 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:11.612204 | orchestrator | 2026-03-29 00:59:11 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:11.613452 | orchestrator | 2026-03-29 00:59:11 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:59:11.614357 | orchestrator | 2026-03-29 00:59:11 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:59:11.614390 | orchestrator | 2026-03-29 00:59:11 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:14.667862 | orchestrator | 2026-03-29 00:59:14 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:14.669963 | orchestrator | 2026-03-29 00:59:14 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:59:14.671686 | orchestrator | 2026-03-29 00:59:14 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:59:14.671760 | orchestrator | 2026-03-29 00:59:14 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:17.726709 | orchestrator | 2026-03-29 00:59:17 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:17.726924 | orchestrator | 2026-03-29 00:59:17 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:59:17.729065 | orchestrator | 2026-03-29 00:59:17 | 
INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:59:17.729136 | orchestrator | 2026-03-29 00:59:17 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:20.786178 | orchestrator | 2026-03-29 00:59:20 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:20.787131 | orchestrator | 2026-03-29 00:59:20 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:59:20.790682 | orchestrator | 2026-03-29 00:59:20 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state STARTED 2026-03-29 00:59:20.790751 | orchestrator | 2026-03-29 00:59:20 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:23.840797 | orchestrator | 2026-03-29 00:59:23 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:23.843171 | orchestrator | 2026-03-29 00:59:23 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:59:23.850439 | orchestrator | 2026-03-29 00:59:23 | INFO  | Task 99268bad-17e8-4785-b00f-c5fdff712ea2 is in state SUCCESS 2026-03-29 00:59:23.852524 | orchestrator | 2026-03-29 00:59:23.852698 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-29 00:59:23.852714 | orchestrator | 2.16.14 2026-03-29 00:59:23.852724 | orchestrator | 2026-03-29 00:59:23.852733 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-29 00:59:23.852743 | orchestrator | 2026-03-29 00:59:23.852752 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-29 00:59:23.852760 | orchestrator | Sunday 29 March 2026 00:48:13 +0000 (0:00:00.889) 0:00:00.889 ********** 2026-03-29 00:59:23.852781 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.852815 
| orchestrator | 2026-03-29 00:59:23.852825 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-29 00:59:23.852834 | orchestrator | Sunday 29 March 2026 00:48:14 +0000 (0:00:01.303) 0:00:02.193 ********** 2026-03-29 00:59:23.852843 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.852852 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.852861 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.852869 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.852878 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.852887 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.852895 | orchestrator | 2026-03-29 00:59:23.852904 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-29 00:59:23.852913 | orchestrator | Sunday 29 March 2026 00:48:16 +0000 (0:00:01.921) 0:00:04.114 ********** 2026-03-29 00:59:23.852941 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.852952 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.852963 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.852999 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.853009 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.853019 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.853028 | orchestrator | 2026-03-29 00:59:23.853039 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-29 00:59:23.853074 | orchestrator | Sunday 29 March 2026 00:48:17 +0000 (0:00:00.815) 0:00:04.930 ********** 2026-03-29 00:59:23.853084 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.853094 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.853103 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.853113 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.853123 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.853132 | orchestrator | ok: 
[testbed-node-1] 2026-03-29 00:59:23.853142 | orchestrator | 2026-03-29 00:59:23.853152 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-29 00:59:23.853216 | orchestrator | Sunday 29 March 2026 00:48:18 +0000 (0:00:01.002) 0:00:05.932 ********** 2026-03-29 00:59:23.853229 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.853256 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.853267 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.853277 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.853286 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.853296 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.853305 | orchestrator | 2026-03-29 00:59:23.853314 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-29 00:59:23.853339 | orchestrator | Sunday 29 March 2026 00:48:19 +0000 (0:00:00.924) 0:00:06.857 ********** 2026-03-29 00:59:23.853371 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.853381 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.853389 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.853398 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.853441 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.853552 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.853562 | orchestrator | 2026-03-29 00:59:23.853570 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-29 00:59:23.853579 | orchestrator | Sunday 29 March 2026 00:48:20 +0000 (0:00:00.851) 0:00:07.709 ********** 2026-03-29 00:59:23.853588 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.853597 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.853606 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.853614 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.853623 | orchestrator | ok: [testbed-node-1] 2026-03-29 
00:59:23.853631 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.853640 | orchestrator | 2026-03-29 00:59:23.853649 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-29 00:59:23.853658 | orchestrator | Sunday 29 March 2026 00:48:22 +0000 (0:00:01.934) 0:00:09.643 ********** 2026-03-29 00:59:23.853667 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.853677 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.853686 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.853694 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.853703 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.853712 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.853720 | orchestrator | 2026-03-29 00:59:23.853729 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-29 00:59:23.853738 | orchestrator | Sunday 29 March 2026 00:48:23 +0000 (0:00:01.281) 0:00:10.925 ********** 2026-03-29 00:59:23.853776 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.853786 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.853795 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.853804 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.853813 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.853821 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.853847 | orchestrator | 2026-03-29 00:59:23.853857 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-29 00:59:23.853866 | orchestrator | Sunday 29 March 2026 00:48:24 +0000 (0:00:00.992) 0:00:11.917 ********** 2026-03-29 00:59:23.853942 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 00:59:23.853952 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 
00:59:23.853961 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 00:59:23.853970 | orchestrator | 2026-03-29 00:59:23.853978 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-29 00:59:23.854007 | orchestrator | Sunday 29 March 2026 00:48:25 +0000 (0:00:00.802) 0:00:12.719 ********** 2026-03-29 00:59:23.854059 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.854068 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.854077 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.854102 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.854122 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.854184 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.854205 | orchestrator | 2026-03-29 00:59:23.854220 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-29 00:59:23.854235 | orchestrator | Sunday 29 March 2026 00:48:26 +0000 (0:00:01.558) 0:00:14.278 ********** 2026-03-29 00:59:23.854276 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 00:59:23.854303 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 00:59:23.854320 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 00:59:23.854337 | orchestrator | 2026-03-29 00:59:23.854354 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-29 00:59:23.854366 | orchestrator | Sunday 29 March 2026 00:48:29 +0000 (0:00:02.715) 0:00:16.993 ********** 2026-03-29 00:59:23.854414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-29 00:59:23.854424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-29 00:59:23.854432 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-2)  2026-03-29 00:59:23.854441 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.854473 | orchestrator | 2026-03-29 00:59:23.854484 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-29 00:59:23.854493 | orchestrator | Sunday 29 March 2026 00:48:30 +0000 (0:00:00.574) 0:00:17.568 ********** 2026-03-29 00:59:23.854503 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.854514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.854523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.854532 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.854540 | orchestrator | 2026-03-29 00:59:23.854549 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-29 00:59:23.854558 | orchestrator | Sunday 29 March 2026 00:48:31 +0000 (0:00:01.029) 0:00:18.598 ********** 2026-03-29 00:59:23.854568 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-03-29 00:59:23.854579 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.854588 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.854597 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.854615 | orchestrator | 2026-03-29 00:59:23.854624 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-29 00:59:23.854633 | orchestrator | Sunday 29 March 2026 00:48:31 +0000 (0:00:00.481) 0:00:19.079 ********** 2026-03-29 00:59:23.854718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-29 00:48:27.591385', 'end': '2026-03-29 00:48:27.667179', 'delta': '0:00:00.075794', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 
'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.854736 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-29 00:48:28.624963', 'end': '2026-03-29 00:48:28.696339', 'delta': '0:00:00.071376', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.854746 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-29 00:48:29.312826', 'end': '2026-03-29 00:48:29.390358', 'delta': '0:00:00.077532', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.854756 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.854764 | orchestrator | 2026-03-29 00:59:23.854773 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-29 00:59:23.854782 | orchestrator | Sunday 29 March 2026 00:48:32 +0000 (0:00:00.440) 0:00:19.520 ********** 2026-03-29 
00:59:23.854813 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.854822 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.854831 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.854840 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.854848 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.854857 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.854866 | orchestrator | 2026-03-29 00:59:23.854874 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-29 00:59:23.854883 | orchestrator | Sunday 29 March 2026 00:48:34 +0000 (0:00:02.337) 0:00:21.857 ********** 2026-03-29 00:59:23.854892 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 00:59:23.854901 | orchestrator | 2026-03-29 00:59:23.854909 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-29 00:59:23.854918 | orchestrator | Sunday 29 March 2026 00:48:35 +0000 (0:00:01.270) 0:00:23.128 ********** 2026-03-29 00:59:23.854927 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.854935 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.854944 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.854953 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.854968 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.854977 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.854985 | orchestrator | 2026-03-29 00:59:23.854994 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-29 00:59:23.855003 | orchestrator | Sunday 29 March 2026 00:48:37 +0000 (0:00:01.443) 0:00:24.571 ********** 2026-03-29 00:59:23.855011 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.855020 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.855029 | orchestrator | skipping: [testbed-node-5] 2026-03-29 
00:59:23.855038 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.855046 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.855055 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.855064 | orchestrator | 2026-03-29 00:59:23.855079 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 00:59:23.855094 | orchestrator | Sunday 29 March 2026 00:48:39 +0000 (0:00:02.348) 0:00:26.920 ********** 2026-03-29 00:59:23.855108 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.855158 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.855176 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.855191 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.855207 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.855223 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.855238 | orchestrator | 2026-03-29 00:59:23.855253 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-29 00:59:23.855268 | orchestrator | Sunday 29 March 2026 00:48:40 +0000 (0:00:01.326) 0:00:28.246 ********** 2026-03-29 00:59:23.855277 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.855285 | orchestrator | 2026-03-29 00:59:23.855294 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-29 00:59:23.855302 | orchestrator | Sunday 29 March 2026 00:48:41 +0000 (0:00:00.137) 0:00:28.384 ********** 2026-03-29 00:59:23.855311 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.855319 | orchestrator | 2026-03-29 00:59:23.855328 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 00:59:23.855337 | orchestrator | Sunday 29 March 2026 00:48:41 +0000 (0:00:00.218) 0:00:28.602 ********** 2026-03-29 00:59:23.855432 | orchestrator | skipping: [testbed-node-3] 2026-03-29 
00:59:23.855445 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.855477 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.855494 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.855503 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.855512 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.855520 | orchestrator |
2026-03-29 00:59:23.855529 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-29 00:59:23.855538 | orchestrator | Sunday 29 March 2026 00:48:41 +0000 (0:00:00.646) 0:00:29.249 **********
2026-03-29 00:59:23.855547 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.855556 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.855565 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.855573 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.855605 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.855615 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.855626 | orchestrator |
2026-03-29 00:59:23.855641 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-29 00:59:23.855664 | orchestrator | Sunday 29 March 2026 00:48:42 +0000 (0:00:00.777) 0:00:30.026 **********
2026-03-29 00:59:23.855678 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.855692 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.855706 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.855720 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.855733 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.855746 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.855761 | orchestrator |
2026-03-29 00:59:23.855787 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-29 00:59:23.855803 | orchestrator | Sunday 29 March 2026 00:48:43 +0000 (0:00:00.578) 0:00:30.605 **********
2026-03-29 00:59:23.855817 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.855884 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.855895 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.855903 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.855912 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.855921 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.855929 | orchestrator |
2026-03-29 00:59:23.855938 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-29 00:59:23.855947 | orchestrator | Sunday 29 March 2026 00:48:44 +0000 (0:00:00.957) 0:00:31.562 **********
2026-03-29 00:59:23.855955 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.855964 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.855972 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.855981 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.855989 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.856019 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.856028 | orchestrator |
2026-03-29 00:59:23.856037 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-29 00:59:23.856045 | orchestrator | Sunday 29 March 2026 00:48:44 +0000 (0:00:00.678) 0:00:32.241 **********
2026-03-29 00:59:23.856054 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.856062 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.856071 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.856080 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.856089 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.856098 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.856106 | orchestrator |
2026-03-29 00:59:23.856115 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-29 00:59:23.856125 | orchestrator | Sunday 29 March 2026 00:48:45 +0000 (0:00:00.950) 0:00:33.191 **********
2026-03-29 00:59:23.856133 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.856142 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.856151 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.856160 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.856168 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.856177 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.856186 | orchestrator |
2026-03-29 00:59:23.856195 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-29 00:59:23.856203 | orchestrator | Sunday 29 March 2026 00:48:47 +0000 (0:00:01.323) 0:00:34.515 **********
2026-03-29 00:59:23.856216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ec951f8f--e82d--5973--b083--619786b6a4a7-osd--block--ec951f8f--e82d--5973--b083--619786b6a4a7', 'dm-uuid-LVM-9b9wJNrZETWOFpxcna2wuDQPfWOghzez0v4d7ZugYsCTYvBdsaVZHmcJ0Y6u0VzP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fb9b884b--e3c0--524d--8e95--f889faf8bdb8-osd--block--fb9b884b--e3c0--524d--8e95--f889faf8bdb8', 'dm-uuid-LVM-6qZ8Xz3PCo1t1iPHk1JSrR1oaX7zPMLsbbh5y7RBImcMbddwWlsb8BK6SH7G4D1x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856305 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '',
'links': {'ids': ['dm-name-ceph--687a2d88--e62e--55f7--9995--e7b8ae522292-osd--block--687a2d88--e62e--55f7--9995--e7b8ae522292', 'dm-uuid-LVM-HmDwxas3Vt7MoPpfiLodPOIM77MdTsZVDz7gRgsdG1f2rJXPvbHyToe5zAfcWUEh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part1', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part14', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part15', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part16', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:59:23.856497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ec951f8f--e82d--5973--b083--619786b6a4a7-osd--block--ec951f8f--e82d--5973--b083--619786b6a4a7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dz2DBe-zqa5-HAl3-4e2z-wvY0-8aLh-eT0uGT', 'scsi-0QEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551', 'scsi-SQEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:59:23.856511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b95a2846--f14f--5a7d--ae9e--15318cf5fdef-osd--block--b95a2846--f14f--5a7d--ae9e--15318cf5fdef', 'dm-uuid-LVM-7XZFubPM5hWk3Oi0Q4YKj9G7POqXT9ZprgBP3A37GbVWoecRs7xdEMHzSdODNj4z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--00df2b4e--a360--5652--a277--e346f3e9f535-osd--block--00df2b4e--a360--5652--a277--e346f3e9f535', 'dm-uuid-LVM-IAp02j5g2oQ3zhw0uSFtEtUX8CGfBcpguA02yzkM0hs4bmzvbcYzPv39fZqX0dZl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fb9b884b--e3c0--524d--8e95--f889faf8bdb8-osd--block--fb9b884b--e3c0--524d--8e95--f889faf8bdb8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nsF1OP-8KYf-Rtrg-mWx0-i8JD-uxdQ-8WncQo', 'scsi-0QEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf', 'scsi-SQEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:59:23.856576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c', 'scsi-SQEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:59:23.856605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35a0cf9a--662c--5baf--94a5--8e3a66aae069-osd--block--35a0cf9a--662c--5baf--94a5--8e3a66aae069', 'dm-uuid-LVM-xyd1men8VV471cj3uej9m9aQwqp84vvGIafLrRukhWiMEyVwTXBzWbGsreYhDDeI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:59:23.856638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor':
None, 'virtual': 1}})
2026-03-29 00:59:23.856685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856756 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part1', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part14', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part15', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part16', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:59:23.856767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--00df2b4e--a360--5652--a277--e346f3e9f535-osd--block--00df2b4e--a360--5652--a277--e346f3e9f535'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vNrahe-Gh3f-fFop-2AfQ-EXmq-ysXK-ZDOYGr', 'scsi-0QEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c', 'scsi-SQEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:59:23.856797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--35a0cf9a--662c--5baf--94a5--8e3a66aae069-osd--block--35a0cf9a--662c--5baf--94a5--8e3a66aae069'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gnl4if-m9ue-JNEF-UVVM-UBfY-i0OO-QeQRjB', 'scsi-0QEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d', 'scsi-SQEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:59:23.856819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53', 'scsi-SQEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:59:23.856869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:59:23.856884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-29 00:59:23.856901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-29 00:59:23.856927 |
orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.856952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--687a2d88--e62e--55f7--9995--e7b8ae522292-osd--block--687a2d88--e62e--55f7--9995--e7b8ae522292'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XxtEnX-eYq8-LT57-fSiD-l35o-C8D1-uuy9bN', 'scsi-0QEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41', 'scsi-SQEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:59:23.856975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b95a2846--f14f--5a7d--ae9e--15318cf5fdef-osd--block--b95a2846--f14f--5a7d--ae9e--15318cf5fdef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0J0qio-0txj-yjdo-d34w-rvdv-XnOu-nkLd7k', 'scsi-0QEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89', 'scsi-SQEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:59:23.856985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c', 'scsi-SQEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:59:23.856995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:59:23.857004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-29 00:59:23.857029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1', 'scsi-SQEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part1', 'scsi-SQEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part14', 'scsi-SQEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part15', 'scsi-SQEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part16', 'scsi-SQEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:59:23.857117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:59:23.857127 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.857136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857154 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.857164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857173 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.857182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21', 'scsi-SQEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part1', 'scsi-SQEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part14', 'scsi-SQEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part15', 'scsi-SQEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part16', 'scsi-SQEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:59:23.857287 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:59:23.857305 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.857314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-29 00:59:23.857342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 00:59:23.857402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712', 'scsi-SQEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part1', 'scsi-SQEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part14', 'scsi-SQEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part15', 'scsi-SQEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part16', 'scsi-SQEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:59:23.857417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 00:59:23.857426 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.857435 | orchestrator | 2026-03-29 00:59:23.857445 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-29 00:59:23.857485 | orchestrator | Sunday 29 March 2026 00:48:49 +0000 (0:00:02.552) 0:00:37.067 ********** 2026-03-29 00:59:23.857496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ec951f8f--e82d--5973--b083--619786b6a4a7-osd--block--ec951f8f--e82d--5973--b083--619786b6a4a7', 'dm-uuid-LVM-9b9wJNrZETWOFpxcna2wuDQPfWOghzez0v4d7ZugYsCTYvBdsaVZHmcJ0Y6u0VzP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857506 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fb9b884b--e3c0--524d--8e95--f889faf8bdb8-osd--block--fb9b884b--e3c0--524d--8e95--f889faf8bdb8', 'dm-uuid-LVM-6qZ8Xz3PCo1t1iPHk1JSrR1oaX7zPMLsbbh5y7RBImcMbddwWlsb8BK6SH7G4D1x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857521 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857530 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857540 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857554 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857568 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--00df2b4e--a360--5652--a277--e346f3e9f535-osd--block--00df2b4e--a360--5652--a277--e346f3e9f535', 'dm-uuid-LVM-IAp02j5g2oQ3zhw0uSFtEtUX8CGfBcpguA02yzkM0hs4bmzvbcYzPv39fZqX0dZl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857578 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857593 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35a0cf9a--662c--5baf--94a5--8e3a66aae069-osd--block--35a0cf9a--662c--5baf--94a5--8e3a66aae069', 'dm-uuid-LVM-xyd1men8VV471cj3uej9m9aQwqp84vvGIafLrRukhWiMEyVwTXBzWbGsreYhDDeI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-29 00:59:23.857602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857612 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857626 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857639 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857649 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857664 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part1', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part14', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part15', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part16', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-29 00:59:23.857680 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857693 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--687a2d88--e62e--55f7--9995--e7b8ae522292-osd--block--687a2d88--e62e--55f7--9995--e7b8ae522292', 'dm-uuid-LVM-HmDwxas3Vt7MoPpfiLodPOIM77MdTsZVDz7gRgsdG1f2rJXPvbHyToe5zAfcWUEh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b95a2846--f14f--5a7d--ae9e--15318cf5fdef-osd--block--b95a2846--f14f--5a7d--ae9e--15318cf5fdef', 'dm-uuid-LVM-7XZFubPM5hWk3Oi0Q4YKj9G7POqXT9ZprgBP3A37GbVWoecRs7xdEMHzSdODNj4z'], 'labels': [], 'masters': 
[], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ec951f8f--e82d--5973--b083--619786b6a4a7-osd--block--ec951f8f--e82d--5973--b083--619786b6a4a7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dz2DBe-zqa5-HAl3-4e2z-wvY0-8aLh-eT0uGT', 'scsi-0QEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551', 'scsi-SQEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857727 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.857736 | orchestrator | 
skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858206 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858229 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fb9b884b--e3c0--524d--8e95--f889faf8bdb8-osd--block--fb9b884b--e3c0--524d--8e95--f889faf8bdb8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nsF1OP-8KYf-Rtrg-mWx0-i8JD-uxdQ-8WncQo', 'scsi-0QEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf', 'scsi-SQEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858247 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858257 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c', 'scsi-SQEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858266 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858282 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858295 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858310 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858320 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858329 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858339 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858348 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858369 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858384 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858393 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858402 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858424 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858474 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858483 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.858506 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858520 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858530 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 
'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858540 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858549 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858571 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1', 'scsi-SQEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part1', 'scsi-SQEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part14', 'scsi-SQEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part15', 'scsi-SQEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part16', 'scsi-SQEMU_QEMU_HARDDISK_bf152443-0fe9-4f46-a676-7ec0334a56b1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 
1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858588 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part1', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part14', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part15', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part16', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858604 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858622 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858632 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE 
[Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858641 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.858650 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--00df2b4e--a360--5652--a277--e346f3e9f535-osd--block--00df2b4e--a360--5652--a277--e346f3e9f535'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vNrahe-Gh3f-fFop-2AfQ-EXmq-ysXK-ZDOYGr', 'scsi-0QEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c', 'scsi-SQEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858659 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858668 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858686 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--35a0cf9a--662c--5baf--94a5--8e3a66aae069-osd--block--35a0cf9a--662c--5baf--94a5--8e3a66aae069'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gnl4if-m9ue-JNEF-UVVM-UBfY-i0OO-QeQRjB', 'scsi-0QEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d', 'scsi-SQEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858700 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858709 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858718 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858738 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part15', 
'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858753 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858762 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858772 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858781 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53', 'scsi-SQEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858803 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21', 'scsi-SQEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part1', 'scsi-SQEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part14', 'scsi-SQEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part15', 
'scsi-SQEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part16', 'scsi-SQEMU_QEMU_HARDDISK_ad319918-57bd-4a4f-a2a3-8dffba7c3c21-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858820 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858833 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858843 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--687a2d88--e62e--55f7--9995--e7b8ae522292-osd--block--687a2d88--e62e--55f7--9995--e7b8ae522292'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XxtEnX-eYq8-LT57-fSiD-l35o-C8D1-uuy9bN', 'scsi-0QEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41', 'scsi-SQEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858858 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.858880 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858891 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858902 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b95a2846--f14f--5a7d--ae9e--15318cf5fdef-osd--block--b95a2846--f14f--5a7d--ae9e--15318cf5fdef'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0J0qio-0txj-yjdo-d34w-rvdv-XnOu-nkLd7k', 'scsi-0QEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89', 'scsi-SQEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858912 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.858923 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858935 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858966 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c', 'scsi-SQEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.858995 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712', 'scsi-SQEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part1', 'scsi-SQEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part14', 'scsi-SQEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part15', 'scsi-SQEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part16', 'scsi-SQEMU_QEMU_HARDDISK_953ed705-bbcf-48ea-89de-0fb88bd35712-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-29 00:59:23.859016 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.859031 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.859046 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 00:59:23.859065 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.859079 | orchestrator | 2026-03-29 00:59:23.859101 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-29 00:59:23.859118 | orchestrator | Sunday 29 March 2026 00:48:51 +0000 (0:00:01.811) 
0:00:38.878 ********** 2026-03-29 00:59:23.859132 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.859146 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.859161 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.859176 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.859193 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.859209 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.859226 | orchestrator | 2026-03-29 00:59:23.859243 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-29 00:59:23.859270 | orchestrator | Sunday 29 March 2026 00:48:53 +0000 (0:00:01.822) 0:00:40.701 ********** 2026-03-29 00:59:23.859279 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.859288 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.859297 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.859305 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.859314 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.859323 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.859331 | orchestrator | 2026-03-29 00:59:23.859340 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 00:59:23.859349 | orchestrator | Sunday 29 March 2026 00:48:54 +0000 (0:00:00.841) 0:00:41.542 ********** 2026-03-29 00:59:23.859358 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.859366 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.859375 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.859383 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.859392 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.859400 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.859409 | orchestrator | 2026-03-29 00:59:23.859417 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 00:59:23.859426 | 
orchestrator | Sunday 29 March 2026 00:48:54 +0000 (0:00:00.749) 0:00:42.292 ********** 2026-03-29 00:59:23.859435 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.859443 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.859571 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.859583 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.859593 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.859603 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.859613 | orchestrator | 2026-03-29 00:59:23.859623 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-29 00:59:23.859632 | orchestrator | Sunday 29 March 2026 00:48:55 +0000 (0:00:00.695) 0:00:42.987 ********** 2026-03-29 00:59:23.859642 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.859652 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.859662 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.859671 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.859681 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.859690 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.859700 | orchestrator | 2026-03-29 00:59:23.859710 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-29 00:59:23.859729 | orchestrator | Sunday 29 March 2026 00:48:57 +0000 (0:00:01.465) 0:00:44.453 ********** 2026-03-29 00:59:23.859739 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.859749 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.859758 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.859768 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.859777 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.859787 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.859797 | orchestrator | 2026-03-29 
00:59:23.859806 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-29 00:59:23.859816 | orchestrator | Sunday 29 March 2026 00:48:57 +0000 (0:00:00.678) 0:00:45.131 **********
2026-03-29 00:59:23.859826 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-29 00:59:23.859836 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-29 00:59:23.859845 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-29 00:59:23.859855 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-29 00:59:23.859865 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-29 00:59:23.859874 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-29 00:59:23.859884 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-29 00:59:23.859893 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-29 00:59:23.859933 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-29 00:59:23.859944 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-29 00:59:23.859954 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-29 00:59:23.859963 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-29 00:59:23.859973 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 00:59:23.859982 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-29 00:59:23.859992 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-29 00:59:23.860001 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-29 00:59:23.860011 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 00:59:23.860020 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 00:59:23.860030 | orchestrator |
2026-03-29 00:59:23.860040 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-29 00:59:23.860049 | orchestrator | Sunday 29 March 2026 00:49:01 +0000 (0:00:04.149) 0:00:49.281 **********
2026-03-29 00:59:23.860056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-29 00:59:23.860064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-29 00:59:23.860072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-29 00:59:23.860080 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.860088 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-29 00:59:23.860096 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-29 00:59:23.860103 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-29 00:59:23.860111 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.860119 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-29 00:59:23.860134 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-29 00:59:23.860143 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-29 00:59:23.860150 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-29 00:59:23.860158 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-29 00:59:23.860166 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-29 00:59:23.860174 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.860187 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-29 00:59:23.860195 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-29 00:59:23.860208 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-29 00:59:23.860216 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.860224 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.860232 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-29 00:59:23.860240 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-29 00:59:23.860248 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-29 00:59:23.860256 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.860264 | orchestrator |
2026-03-29 00:59:23.860271 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-29 00:59:23.860279 | orchestrator | Sunday 29 March 2026 00:49:02 +0000 (0:00:00.779) 0:00:50.061 **********
2026-03-29 00:59:23.860287 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.860295 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.860303 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.860311 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:59:23.860319 | orchestrator |
2026-03-29 00:59:23.860327 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-29 00:59:23.860335 | orchestrator | Sunday 29 March 2026 00:49:04 +0000 (0:00:01.680) 0:00:51.742 **********
2026-03-29 00:59:23.860343 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.860351 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.860359 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.860367 | orchestrator |
2026-03-29 00:59:23.860375 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-29 00:59:23.860383 | orchestrator | Sunday 29 March 2026 00:49:05 +0000 (0:00:00.788) 0:00:52.530 **********
2026-03-29 00:59:23.860390 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.860398 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.860406 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.860414 | orchestrator |
2026-03-29 00:59:23.860422 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-29 00:59:23.860430 | orchestrator | Sunday 29 March 2026 00:49:05 +0000 (0:00:00.762) 0:00:53.292 **********
2026-03-29 00:59:23.860438 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.860446 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.860489 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.860497 | orchestrator |
2026-03-29 00:59:23.860505 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-29 00:59:23.860513 | orchestrator | Sunday 29 March 2026 00:49:06 +0000 (0:00:00.774) 0:00:54.067 **********
2026-03-29 00:59:23.860521 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.860529 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.860537 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.860545 | orchestrator |
2026-03-29 00:59:23.860553 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-29 00:59:23.860561 | orchestrator | Sunday 29 March 2026 00:49:07 +0000 (0:00:01.012) 0:00:55.079 **********
2026-03-29 00:59:23.860569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 00:59:23.860577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 00:59:23.860585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 00:59:23.860592 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.860600 | orchestrator |
2026-03-29 00:59:23.860608 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-29 00:59:23.860616 | orchestrator | Sunday 29 March 2026 00:49:08 +0000 (0:00:00.503) 0:00:55.583 **********
2026-03-29 00:59:23.860624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 00:59:23.860632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 00:59:23.860646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 00:59:23.860654 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.860662 | orchestrator |
2026-03-29 00:59:23.860670 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-29 00:59:23.860678 | orchestrator | Sunday 29 March 2026 00:49:08 +0000 (0:00:00.393) 0:00:55.977 **********
2026-03-29 00:59:23.860685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 00:59:23.860694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 00:59:23.860702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 00:59:23.860709 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.860717 | orchestrator |
2026-03-29 00:59:23.860725 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-29 00:59:23.860733 | orchestrator | Sunday 29 March 2026 00:49:09 +0000 (0:00:00.411) 0:00:56.389 **********
2026-03-29 00:59:23.860741 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.860749 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.860757 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.860764 | orchestrator |
2026-03-29 00:59:23.860771 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-29 00:59:23.860778 | orchestrator | Sunday 29 March 2026 00:49:09 +0000 (0:00:00.461) 0:00:56.850 **********
2026-03-29 00:59:23.860784 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-29 00:59:23.860791 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-29 00:59:23.860803 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-29 00:59:23.860809 | orchestrator |
2026-03-29 00:59:23.860816 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-29 00:59:23.860823 | orchestrator | Sunday 29 March 2026 00:49:10 +0000 (0:00:01.435) 0:00:58.286 **********
2026-03-29 00:59:23.860829 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-29 00:59:23.860836 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 00:59:23.860846 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 00:59:23.860853 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 00:59:23.860860 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-29 00:59:23.860866 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-29 00:59:23.860873 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-29 00:59:23.860880 | orchestrator |
2026-03-29 00:59:23.860886 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-29 00:59:23.860893 | orchestrator | Sunday 29 March 2026 00:49:11 +0000 (0:00:00.853) 0:00:59.140 **********
2026-03-29 00:59:23.860900 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-29 00:59:23.860906 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 00:59:23.860913 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 00:59:23.860919 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 00:59:23.860926 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-29 00:59:23.860933 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-29 00:59:23.860939 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-29 00:59:23.860946 | orchestrator |
2026-03-29 00:59:23.860952 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-29 00:59:23.860959 | orchestrator | Sunday 29 March 2026 00:49:13 +0000 (0:00:01.817) 0:01:00.957 **********
2026-03-29 00:59:23.860966 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:59:23.860979 | orchestrator |
2026-03-29 00:59:23.860985 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-29 00:59:23.860992 | orchestrator | Sunday 29 March 2026 00:49:14 +0000 (0:00:01.106) 0:01:02.064 **********
2026-03-29 00:59:23.860999 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:59:23.861006 | orchestrator |
2026-03-29 00:59:23.861012 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-29 00:59:23.861019 | orchestrator | Sunday 29 March 2026 00:49:15 +0000 (0:00:01.266) 0:01:03.330 **********
2026-03-29 00:59:23.861026 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.861032 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.861039 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.861045 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.861052 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.861059 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.861065 | orchestrator |
2026-03-29 00:59:23.861072 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-29 00:59:23.861079 | orchestrator | Sunday 29 March 2026 00:49:17 +0000 (0:00:01.425) 0:01:04.755 **********
2026-03-29 00:59:23.861085 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.861092 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.861099 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.861105 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.861112 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.861119 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.861125 | orchestrator |
2026-03-29 00:59:23.861132 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-29 00:59:23.861139 | orchestrator | Sunday 29 March 2026 00:49:18 +0000 (0:00:00.995) 0:01:05.750 **********
2026-03-29 00:59:23.861145 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.861152 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.861159 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.861165 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.861172 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.861179 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.861185 | orchestrator |
2026-03-29 00:59:23.861192 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-29 00:59:23.861199 | orchestrator | Sunday 29 March 2026 00:49:19 +0000 (0:00:01.170) 0:01:06.921 **********
2026-03-29 00:59:23.861205 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.861212 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.861218 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.861225 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.861232 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.861238 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.861245 | orchestrator |
2026-03-29 00:59:23.861252 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-29 00:59:23.861259 | orchestrator | Sunday 29 March 2026 00:49:20 +0000 (0:00:00.803) 0:01:07.724 **********
2026-03-29 00:59:23.861265 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.861272 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.861278 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.861285 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.861292 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.861302 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.861309 | orchestrator |
2026-03-29 00:59:23.861316 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-29 00:59:23.861323 | orchestrator | Sunday 29 March 2026 00:49:21 +0000 (0:00:01.159) 0:01:08.884 **********
2026-03-29 00:59:23.861329 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.861336 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.861347 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.861353 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.861360 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.861370 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.861378 | orchestrator |
2026-03-29 00:59:23.861384 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-29 00:59:23.861391 | orchestrator | Sunday 29 March 2026 00:49:22 +0000 (0:00:01.061) 0:01:09.460 **********
2026-03-29 00:59:23.861398 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.861404 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.861411 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.861418 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.861424 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.861431 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.861438 | orchestrator |
2026-03-29 00:59:23.861445 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-29 00:59:23.861463 | orchestrator | Sunday 29 March 2026 00:49:23 +0000 (0:00:01.061) 0:01:10.522 **********
2026-03-29 00:59:23.861470 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.861476 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.861483 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.861490 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.861497 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.861503 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.861510 | orchestrator |
2026-03-29 00:59:23.861517 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-29 00:59:23.861523 | orchestrator | Sunday 29 March 2026 00:49:24 +0000 (0:00:01.024) 0:01:11.547 **********
2026-03-29 00:59:23.861530 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.861537 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.861543 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.861550 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.861556 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.861563 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.861570 | orchestrator |
2026-03-29 00:59:23.861576 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-29 00:59:23.861583 | orchestrator | Sunday 29 March 2026 00:49:25 +0000 (0:00:01.164) 0:01:12.712 **********
2026-03-29 00:59:23.861590 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.861597 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.861603 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.861610 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.861616 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.861623 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.861630 | orchestrator |
2026-03-29 00:59:23.861637 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-29 00:59:23.861643 | orchestrator | Sunday 29 March 2026 00:49:25 +0000 (0:00:00.575) 0:01:13.287 **********
2026-03-29 00:59:23.861650 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.861657 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.861663 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.861670 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.861676 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.861683 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.861690 | orchestrator |
2026-03-29 00:59:23.861697 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-29 00:59:23.861703 | orchestrator | Sunday 29 March 2026 00:49:26 +0000 (0:00:00.749) 0:01:14.037 **********
2026-03-29 00:59:23.861710 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.861717 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.861724 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.861730 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.861737 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.861744 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.861755 | orchestrator |
2026-03-29 00:59:23.861761 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-29 00:59:23.861768 | orchestrator | Sunday 29 March 2026 00:49:27 +0000 (0:00:00.522) 0:01:14.560 **********
2026-03-29 00:59:23.861775 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.861781 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.861788 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.861795 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.861801 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.861808 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.861815 | orchestrator |
2026-03-29 00:59:23.861821 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-29 00:59:23.861828 | orchestrator | Sunday 29 March 2026 00:49:27 +0000 (0:00:00.720) 0:01:15.280 **********
2026-03-29 00:59:23.861835 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.861841 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.861848 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.861855 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.861861 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.861868 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.861875 | orchestrator |
2026-03-29 00:59:23.861882 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-29 00:59:23.861888 | orchestrator | Sunday 29 March 2026 00:49:28 +0000 (0:00:00.535) 0:01:15.815 **********
2026-03-29 00:59:23.861895 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.861902 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.861908 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.861915 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.861921 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.861928 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.861935 | orchestrator |
2026-03-29 00:59:23.861941 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-29 00:59:23.861948 | orchestrator | Sunday 29 March 2026 00:49:29 +0000 (0:00:00.804) 0:01:16.619 **********
2026-03-29 00:59:23.861955 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.861962 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.861968 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.861975 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.861986 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.861993 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.862000 | orchestrator |
2026-03-29 00:59:23.862006 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-29 00:59:23.862049 | orchestrator | Sunday 29 March 2026 00:49:29 +0000 (0:00:00.699) 0:01:17.319 **********
2026-03-29 00:59:23.862058 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.862065 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.862072 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.862078 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.862089 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.862095 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.862102 | orchestrator |
2026-03-29 00:59:23.862109 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-29 00:59:23.862116 | orchestrator | Sunday 29 March 2026 00:49:30 +0000 (0:00:00.976) 0:01:18.295 **********
2026-03-29 00:59:23.862122 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.862129 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.862136 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.862142 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.862149 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.862156 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.862162 | orchestrator |
2026-03-29 00:59:23.862169 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-29 00:59:23.862176 | orchestrator | Sunday 29 March 2026 00:49:31 +0000 (0:00:00.986) 0:01:19.282 **********
2026-03-29 00:59:23.862190 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.862197 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.862203 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.862210 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.862217 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.862223 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.862230 | orchestrator |
2026-03-29 00:59:23.862236 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-29 00:59:23.862243 | orchestrator | Sunday 29 March 2026 00:49:33 +0000 (0:00:01.741) 0:01:21.024 **********
2026-03-29 00:59:23.862250 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.862257 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.862263 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:59:23.862270 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:59:23.862277 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.862283 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:59:23.862290 | orchestrator |
2026-03-29 00:59:23.862297 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-29 00:59:23.862303 | orchestrator | Sunday 29 March 2026 00:49:35 +0000 (0:00:01.863) 0:01:22.888 **********
2026-03-29 00:59:23.862310 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.862317 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.862323 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.862330 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:59:23.862336 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:59:23.862343 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:59:23.862350 | orchestrator |
2026-03-29 00:59:23.862357 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-29 00:59:23.862364 | orchestrator | Sunday 29 March 2026 00:49:39 +0000 (0:00:03.674) 0:01:26.562 **********
2026-03-29 00:59:23.862370 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:59:23.862377 | orchestrator |
2026-03-29 00:59:23.862384 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-29 00:59:23.862390 | orchestrator | Sunday 29 March 2026 00:49:40 +0000 (0:00:01.107) 0:01:27.670 **********
2026-03-29 00:59:23.862397 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.862404 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.862411 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.862417 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.862424 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.862431 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.862437 | orchestrator |
2026-03-29 00:59:23.862444 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-29 00:59:23.862462 | orchestrator | Sunday 29 March 2026 00:49:40 +0000 (0:00:00.626) 0:01:28.296 **********
2026-03-29 00:59:23.862469 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.862475 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.862482 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.862488 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.862495 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.862502 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.862508 | orchestrator |
2026-03-29 00:59:23.862515 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-29 00:59:23.862522 | orchestrator | Sunday 29 March 2026 00:49:41 +0000 (0:00:00.961) 0:01:29.258 **********
2026-03-29 00:59:23.862528 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:59:23.862535 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:59:23.862542 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:59:23.862548 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:59:23.862559 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:59:23.862565 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-29 00:59:23.862572 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:59:23.862579 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:59:23.862586 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:59:23.862593 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:59:23.862612 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:59:23.862619 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-29 00:59:23.862626 | orchestrator |
2026-03-29 00:59:23.862633 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-29 00:59:23.862639 | orchestrator | Sunday 29 March 2026 00:49:43 +0000 (0:00:01.561) 0:01:30.819 **********
2026-03-29 00:59:23.862646 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.862656 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.862663 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.862670 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:59:23.862677 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:59:23.862683 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:59:23.862690 | orchestrator |
2026-03-29 00:59:23.862697 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-29 00:59:23.862704 | orchestrator | Sunday 29 March 2026 00:49:44 +0000 (0:00:01.451) 0:01:32.271 **********
2026-03-29 00:59:23.862710 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.862717 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.862723 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.862730 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.862737 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.862743 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.862750 | orchestrator |
2026-03-29 00:59:23.862757 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-29 00:59:23.862764 | orchestrator | Sunday 29 March 2026 00:49:45 +0000 (0:00:00.919) 0:01:33.190 **********
2026-03-29 00:59:23.862770 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.862777 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.862784 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.862790 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.862797 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.862804 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.862810 | orchestrator |
2026-03-29 00:59:23.862817 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-29 00:59:23.862824 | orchestrator | Sunday 29 March 2026 00:49:46 +0000 (0:00:00.779) 0:01:33.972 **********
2026-03-29 00:59:23.862831 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.862837 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.862844 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.862854 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.862865 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.862876 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.862917 | orchestrator |
2026-03-29 00:59:23.862928 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-29 00:59:23.862940 | orchestrator | Sunday 29 March 2026 00:49:47 +0000 (0:00:00.969) 0:01:34.941 **********
2026-03-29 00:59:23.862951 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:59:23.862962 | orchestrator |
2026-03-29 00:59:23.862969 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-29 00:59:23.862981 | orchestrator | Sunday 29 March 2026 00:49:49 +0000 (0:00:01.420) 0:01:36.362 **********
2026-03-29 00:59:23.862988 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.862995 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.863002 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.863008 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.863015 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.863021 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.863028 | orchestrator |
2026-03-29 00:59:23.863035 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-29 00:59:23.863042 | orchestrator | Sunday 29 March 2026 00:50:58 +0000 (0:01:09.257) 0:02:45.619 **********
2026-03-29 00:59:23.863048 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:59:23.863055 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:59:23.863062 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:59:23.863068 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.863075 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:59:23.863081 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:59:23.863088 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:59:23.863094 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.863101 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:59:23.863108 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:59:23.863114 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:59:23.863121 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.863128 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:59:23.863134 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:59:23.863141 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:59:23.863147 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.863154 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:59:23.863161 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:59:23.863168 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:59:23.863175 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.863187 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-29 00:59:23.863194 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-29 00:59:23.863201 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-29 00:59:23.863207 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.863214 | orchestrator |
2026-03-29 00:59:23.863221 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-29 00:59:23.863228 | orchestrator | Sunday 29 March 2026 00:50:59 +0000 (0:00:00.732) 0:02:46.352 **********
2026-03-29 00:59:23.863238 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.863245 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.863252 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.863258 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.863265 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.863272 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.863278 | orchestrator |
2026-03-29 00:59:23.863285 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-29 00:59:23.863292 | orchestrator | Sunday 29 March 2026 00:50:59 +0000 (0:00:00.685) 0:02:47.037 **********
2026-03-29 00:59:23.863306 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.863312 | orchestrator |
2026-03-29 00:59:23.863319 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-29 00:59:23.863326 | orchestrator | Sunday 29 March 2026 00:50:59 +0000 (0:00:00.157) 0:02:47.195 **********
2026-03-29 00:59:23.863332 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.863339 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.863345 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.863352 | orchestrator
| skipping: [testbed-node-0] 2026-03-29 00:59:23.863359 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.863365 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.863372 | orchestrator | 2026-03-29 00:59:23.863379 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-29 00:59:23.863385 | orchestrator | Sunday 29 March 2026 00:51:00 +0000 (0:00:00.774) 0:02:47.969 ********** 2026-03-29 00:59:23.863392 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.863398 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.863405 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.863412 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.863419 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.863425 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.863432 | orchestrator | 2026-03-29 00:59:23.863438 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-29 00:59:23.863445 | orchestrator | Sunday 29 March 2026 00:51:01 +0000 (0:00:00.840) 0:02:48.810 ********** 2026-03-29 00:59:23.863497 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.863509 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.863516 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.863523 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.863530 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.863536 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.863543 | orchestrator | 2026-03-29 00:59:23.863550 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-29 00:59:23.863557 | orchestrator | Sunday 29 March 2026 00:51:02 +0000 (0:00:00.891) 0:02:49.701 ********** 2026-03-29 00:59:23.863563 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.863570 | orchestrator | ok: 
[testbed-node-4] 2026-03-29 00:59:23.863577 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.863584 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.863590 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.863597 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.863603 | orchestrator | 2026-03-29 00:59:23.863610 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-29 00:59:23.863617 | orchestrator | Sunday 29 March 2026 00:51:04 +0000 (0:00:02.613) 0:02:52.315 ********** 2026-03-29 00:59:23.863623 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.863630 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.863636 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.863643 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.863649 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.863656 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.863662 | orchestrator | 2026-03-29 00:59:23.863669 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-29 00:59:23.863676 | orchestrator | Sunday 29 March 2026 00:51:05 +0000 (0:00:00.596) 0:02:52.912 ********** 2026-03-29 00:59:23.863683 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.863690 | orchestrator | 2026-03-29 00:59:23.863696 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-29 00:59:23.863703 | orchestrator | Sunday 29 March 2026 00:51:06 +0000 (0:00:01.160) 0:02:54.072 ********** 2026-03-29 00:59:23.863710 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.863717 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.863729 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.863736 | orchestrator | 
skipping: [testbed-node-0] 2026-03-29 00:59:23.863743 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.863750 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.863757 | orchestrator | 2026-03-29 00:59:23.863763 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-29 00:59:23.863770 | orchestrator | Sunday 29 March 2026 00:51:07 +0000 (0:00:00.901) 0:02:54.973 ********** 2026-03-29 00:59:23.863777 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.863783 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.863790 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.863797 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.863804 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.863810 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.863817 | orchestrator | 2026-03-29 00:59:23.863824 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-29 00:59:23.863830 | orchestrator | Sunday 29 March 2026 00:51:08 +0000 (0:00:00.785) 0:02:55.759 ********** 2026-03-29 00:59:23.863837 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.863844 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.863855 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.863863 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.863869 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.863876 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.863883 | orchestrator | 2026-03-29 00:59:23.863889 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-29 00:59:23.863896 | orchestrator | Sunday 29 March 2026 00:51:09 +0000 (0:00:00.772) 0:02:56.531 ********** 2026-03-29 00:59:23.863903 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.863909 | orchestrator | 
skipping: [testbed-node-4] 2026-03-29 00:59:23.863916 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.863926 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.863933 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.863940 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.863946 | orchestrator | 2026-03-29 00:59:23.863953 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-29 00:59:23.863960 | orchestrator | Sunday 29 March 2026 00:51:09 +0000 (0:00:00.541) 0:02:57.073 ********** 2026-03-29 00:59:23.863966 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.863973 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.863979 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.863986 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.863993 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.863999 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.864006 | orchestrator | 2026-03-29 00:59:23.864013 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-29 00:59:23.864019 | orchestrator | Sunday 29 March 2026 00:51:10 +0000 (0:00:00.642) 0:02:57.715 ********** 2026-03-29 00:59:23.864025 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.864031 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.864037 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.864044 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.864050 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.864056 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.864062 | orchestrator | 2026-03-29 00:59:23.864069 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-29 00:59:23.864075 | orchestrator | Sunday 29 March 2026 00:51:10 +0000 (0:00:00.585) 
0:02:58.301 ********** 2026-03-29 00:59:23.864081 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.864087 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.864093 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.864099 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.864109 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.864115 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.864122 | orchestrator | 2026-03-29 00:59:23.864128 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-29 00:59:23.864134 | orchestrator | Sunday 29 March 2026 00:51:12 +0000 (0:00:01.081) 0:02:59.382 ********** 2026-03-29 00:59:23.864140 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.864146 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.864152 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.864159 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.864165 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.864171 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.864177 | orchestrator | 2026-03-29 00:59:23.864183 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-29 00:59:23.864189 | orchestrator | Sunday 29 March 2026 00:51:12 +0000 (0:00:00.662) 0:03:00.044 ********** 2026-03-29 00:59:23.864196 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.864202 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.864208 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.864214 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.864221 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.864227 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.864233 | orchestrator | 2026-03-29 00:59:23.864239 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] 
********************** 2026-03-29 00:59:23.864245 | orchestrator | Sunday 29 March 2026 00:51:13 +0000 (0:00:01.144) 0:03:01.189 ********** 2026-03-29 00:59:23.864251 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.864258 | orchestrator | 2026-03-29 00:59:23.864264 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-29 00:59:23.864270 | orchestrator | Sunday 29 March 2026 00:51:14 +0000 (0:00:01.027) 0:03:02.216 ********** 2026-03-29 00:59:23.864276 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-29 00:59:23.864282 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-29 00:59:23.864288 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-29 00:59:23.864295 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-29 00:59:23.864301 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-29 00:59:23.864307 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-29 00:59:23.864313 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-29 00:59:23.864319 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-29 00:59:23.864325 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-29 00:59:23.864331 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-29 00:59:23.864337 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-29 00:59:23.864343 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-29 00:59:23.864350 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-29 00:59:23.864356 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-29 00:59:23.864362 | orchestrator | 
changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-29 00:59:23.864368 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-29 00:59:23.864374 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-29 00:59:23.864380 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-29 00:59:23.864390 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-29 00:59:23.864397 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-29 00:59:23.864403 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-29 00:59:23.864409 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-29 00:59:23.864422 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-29 00:59:23.864428 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-29 00:59:23.864434 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-29 00:59:23.864443 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-29 00:59:23.864466 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-29 00:59:23.864474 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-29 00:59:23.864481 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-29 00:59:23.864487 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-29 00:59:23.864493 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-29 00:59:23.864499 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-29 00:59:23.864506 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-29 00:59:23.864512 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-29 00:59:23.864518 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/tmp) 2026-03-29 00:59:23.864524 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-29 00:59:23.864530 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-29 00:59:23.864536 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-29 00:59:23.864542 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-29 00:59:23.864549 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-29 00:59:23.864555 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-29 00:59:23.864561 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-29 00:59:23.864567 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-29 00:59:23.864573 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-29 00:59:23.864579 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-29 00:59:23.864585 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 00:59:23.864591 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-29 00:59:23.864598 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-29 00:59:23.864604 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 00:59:23.864610 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 00:59:23.864616 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 00:59:23.864622 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 00:59:23.864628 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 00:59:23.864634 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 
00:59:23.864640 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-29 00:59:23.864646 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 00:59:23.864652 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 00:59:23.864659 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 00:59:23.864665 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 00:59:23.864671 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 00:59:23.864677 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 00:59:23.864683 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-29 00:59:23.864689 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-29 00:59:23.864695 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-29 00:59:23.864706 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 00:59:23.864712 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 00:59:23.864718 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-29 00:59:23.864724 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 00:59:23.864730 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-29 00:59:23.864736 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 00:59:23.864742 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-29 00:59:23.864749 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:59:23.864755 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-29 00:59:23.864761 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-29 00:59:23.864767 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 00:59:23.864773 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-29 00:59:23.864783 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:59:23.864789 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 00:59:23.864796 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 00:59:23.864802 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-29 00:59:23.864808 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-29 00:59:23.864814 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:59:23.864824 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-29 00:59:23.864830 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:59:23.864836 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:59:23.864843 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-29 00:59:23.864849 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-29 00:59:23.864855 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-29 00:59:23.864861 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-29 00:59:23.864867 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-29 00:59:23.864874 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-29 00:59:23.864880 | orchestrator | changed: 
[testbed-node-4] => (item=/var/log/ceph) 2026-03-29 00:59:23.864886 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-29 00:59:23.864892 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-29 00:59:23.864898 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-29 00:59:23.864905 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-29 00:59:23.864911 | orchestrator | 2026-03-29 00:59:23.864917 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-29 00:59:23.864923 | orchestrator | Sunday 29 March 2026 00:51:22 +0000 (0:00:07.749) 0:03:09.966 ********** 2026-03-29 00:59:23.864929 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.864936 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.864942 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.864949 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.864955 | orchestrator | 2026-03-29 00:59:23.864961 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-29 00:59:23.864967 | orchestrator | Sunday 29 March 2026 00:51:23 +0000 (0:00:00.825) 0:03:10.792 ********** 2026-03-29 00:59:23.864977 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.864984 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.864990 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.864996 | orchestrator | 2026-03-29 00:59:23.865002 | orchestrator | TASK [ceph-config : 
Generate environment file] ********************************* 2026-03-29 00:59:23.865008 | orchestrator | Sunday 29 March 2026 00:51:24 +0000 (0:00:01.016) 0:03:11.808 ********** 2026-03-29 00:59:23.865015 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.865021 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.865027 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.865033 | orchestrator | 2026-03-29 00:59:23.865040 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-29 00:59:23.865046 | orchestrator | Sunday 29 March 2026 00:51:25 +0000 (0:00:01.282) 0:03:13.090 ********** 2026-03-29 00:59:23.865052 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.865058 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.865064 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.865071 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865077 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865083 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865089 | orchestrator | 2026-03-29 00:59:23.865095 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-29 00:59:23.865101 | orchestrator | Sunday 29 March 2026 00:51:26 +0000 (0:00:00.581) 0:03:13.672 ********** 2026-03-29 00:59:23.865107 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.865114 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.865120 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.865126 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865132 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 00:59:23.865138 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865144 | orchestrator | 2026-03-29 00:59:23.865151 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-29 00:59:23.865157 | orchestrator | Sunday 29 March 2026 00:51:27 +0000 (0:00:00.830) 0:03:14.502 ********** 2026-03-29 00:59:23.865163 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.865169 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.865175 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.865181 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865188 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865194 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865200 | orchestrator | 2026-03-29 00:59:23.865210 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-29 00:59:23.865216 | orchestrator | Sunday 29 March 2026 00:51:27 +0000 (0:00:00.508) 0:03:15.011 ********** 2026-03-29 00:59:23.865222 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.865229 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.865235 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.865241 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865247 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865253 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865259 | orchestrator | 2026-03-29 00:59:23.865265 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-29 00:59:23.865275 | orchestrator | Sunday 29 March 2026 00:51:28 +0000 (0:00:00.661) 0:03:15.673 ********** 2026-03-29 00:59:23.865287 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.865293 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.865300 | orchestrator | skipping: 
[testbed-node-5] 2026-03-29 00:59:23.865306 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865312 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865318 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865324 | orchestrator | 2026-03-29 00:59:23.865330 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-29 00:59:23.865337 | orchestrator | Sunday 29 March 2026 00:51:28 +0000 (0:00:00.599) 0:03:16.272 ********** 2026-03-29 00:59:23.865343 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.865349 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.865355 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.865362 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865368 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865374 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865380 | orchestrator | 2026-03-29 00:59:23.865386 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-29 00:59:23.865393 | orchestrator | Sunday 29 March 2026 00:51:29 +0000 (0:00:00.695) 0:03:16.968 ********** 2026-03-29 00:59:23.865399 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.865405 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.865411 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.865417 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865424 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865430 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865436 | orchestrator | 2026-03-29 00:59:23.865444 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-29 00:59:23.865471 | orchestrator | Sunday 29 March 2026 00:51:30 +0000 (0:00:00.502) 
0:03:17.470 ********** 2026-03-29 00:59:23.865490 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.865499 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.865509 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.865518 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865527 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865536 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865546 | orchestrator | 2026-03-29 00:59:23.865555 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-29 00:59:23.865563 | orchestrator | Sunday 29 March 2026 00:51:30 +0000 (0:00:00.755) 0:03:18.226 ********** 2026-03-29 00:59:23.865572 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865581 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865591 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865600 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.865610 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.865620 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.865632 | orchestrator | 2026-03-29 00:59:23.865638 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-29 00:59:23.865644 | orchestrator | Sunday 29 March 2026 00:51:34 +0000 (0:00:03.780) 0:03:22.007 ********** 2026-03-29 00:59:23.865651 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.865657 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.865663 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.865669 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865675 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865682 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865688 | orchestrator | 2026-03-29 00:59:23.865694 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] 
******************************* 2026-03-29 00:59:23.865700 | orchestrator | Sunday 29 March 2026 00:51:35 +0000 (0:00:00.911) 0:03:22.919 ********** 2026-03-29 00:59:23.865707 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.865713 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.865725 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.865731 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865738 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865744 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865750 | orchestrator | 2026-03-29 00:59:23.865756 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-29 00:59:23.865762 | orchestrator | Sunday 29 March 2026 00:51:36 +0000 (0:00:00.788) 0:03:23.708 ********** 2026-03-29 00:59:23.865769 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.865775 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.865781 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.865787 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865793 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865800 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865806 | orchestrator | 2026-03-29 00:59:23.865812 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-29 00:59:23.865818 | orchestrator | Sunday 29 March 2026 00:51:37 +0000 (0:00:00.973) 0:03:24.681 ********** 2026-03-29 00:59:23.865825 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.865831 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.865837 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.865844 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865855 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865861 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865868 | orchestrator | 2026-03-29 00:59:23.865874 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-29 00:59:23.865883 | orchestrator | Sunday 29 March 2026 00:51:38 +0000 (0:00:00.949) 0:03:25.630 ********** 2026-03-29 00:59:23.865900 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-29 00:59:23.865914 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-29 00:59:23.865925 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-29 00:59:23.865935 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-29 00:59:23.865941 | 
orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.865947 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-29 00:59:23.865954 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.865961 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-03-29 00:59:23.865971 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.865977 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.865983 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.865989 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.865996 | orchestrator | 2026-03-29 00:59:23.866002 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-29 00:59:23.866008 | orchestrator | Sunday 29 March 2026 00:51:39 +0000 (0:00:00.750) 0:03:26.380 ********** 2026-03-29 00:59:23.866040 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.866047 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.866053 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.866059 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.866065 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.866071 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.866078 | orchestrator | 2026-03-29 00:59:23.866084 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-29 
00:59:23.866090 | orchestrator | Sunday 29 March 2026 00:51:39 +0000 (0:00:00.523) 0:03:26.904 ********** 2026-03-29 00:59:23.866096 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.866103 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.866109 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.866115 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.866121 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.866127 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.866133 | orchestrator | 2026-03-29 00:59:23.866140 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-29 00:59:23.866146 | orchestrator | Sunday 29 March 2026 00:51:40 +0000 (0:00:00.686) 0:03:27.591 ********** 2026-03-29 00:59:23.866153 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.866159 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.866165 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.866171 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.866178 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.866184 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.866190 | orchestrator | 2026-03-29 00:59:23.866196 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-29 00:59:23.866203 | orchestrator | Sunday 29 March 2026 00:51:40 +0000 (0:00:00.599) 0:03:28.190 ********** 2026-03-29 00:59:23.866209 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.866215 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.866221 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.866227 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.866233 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.866240 | orchestrator | skipping: 
[testbed-node-2] 2026-03-29 00:59:23.866246 | orchestrator | 2026-03-29 00:59:23.866252 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-29 00:59:23.866262 | orchestrator | Sunday 29 March 2026 00:51:41 +0000 (0:00:00.643) 0:03:28.834 ********** 2026-03-29 00:59:23.866269 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.866275 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.866281 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.866287 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.866294 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.866300 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.866306 | orchestrator | 2026-03-29 00:59:23.866312 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-29 00:59:23.866322 | orchestrator | Sunday 29 March 2026 00:51:42 +0000 (0:00:00.533) 0:03:29.368 ********** 2026-03-29 00:59:23.866329 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.866339 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.866345 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.866351 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.866357 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.866364 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.866370 | orchestrator | 2026-03-29 00:59:23.866376 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-29 00:59:23.866382 | orchestrator | Sunday 29 March 2026 00:51:42 +0000 (0:00:00.651) 0:03:30.019 ********** 2026-03-29 00:59:23.866388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:59:23.866395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:59:23.866401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  
2026-03-29 00:59:23.866407 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.866413 | orchestrator | 2026-03-29 00:59:23.866419 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-29 00:59:23.866425 | orchestrator | Sunday 29 March 2026 00:51:43 +0000 (0:00:00.380) 0:03:30.399 ********** 2026-03-29 00:59:23.866432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:59:23.866438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:59:23.866444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:59:23.866468 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.866480 | orchestrator | 2026-03-29 00:59:23.866496 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-29 00:59:23.866506 | orchestrator | Sunday 29 March 2026 00:51:43 +0000 (0:00:00.366) 0:03:30.766 ********** 2026-03-29 00:59:23.866515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:59:23.866525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:59:23.866535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:59:23.866544 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.866552 | orchestrator | 2026-03-29 00:59:23.866562 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-29 00:59:23.866573 | orchestrator | Sunday 29 March 2026 00:51:43 +0000 (0:00:00.344) 0:03:31.111 ********** 2026-03-29 00:59:23.866583 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.866594 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.866601 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.866607 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.866613 | orchestrator | skipping: [testbed-node-1] 
2026-03-29 00:59:23.866619 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.866626 | orchestrator | 2026-03-29 00:59:23.866635 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-29 00:59:23.866648 | orchestrator | Sunday 29 March 2026 00:51:44 +0000 (0:00:00.461) 0:03:31.572 ********** 2026-03-29 00:59:23.866663 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-29 00:59:23.866672 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-29 00:59:23.866682 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-29 00:59:23.866692 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-29 00:59:23.866701 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.866710 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-29 00:59:23.866720 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.866729 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-29 00:59:23.866740 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.866750 | orchestrator | 2026-03-29 00:59:23.866760 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-29 00:59:23.866771 | orchestrator | Sunday 29 March 2026 00:51:45 +0000 (0:00:01.440) 0:03:33.013 ********** 2026-03-29 00:59:23.866781 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.866791 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.866800 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.866819 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:23.866830 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:23.866841 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:23.866850 | orchestrator | 2026-03-29 00:59:23.866856 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-29 00:59:23.866863 | orchestrator | Sunday 29 March 2026 
00:51:48 +0000 (0:00:02.391) 0:03:35.404 ********** 2026-03-29 00:59:23.866869 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.866875 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.866882 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.866888 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:23.866894 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:23.866900 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:23.866906 | orchestrator | 2026-03-29 00:59:23.866913 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-29 00:59:23.866919 | orchestrator | Sunday 29 March 2026 00:51:49 +0000 (0:00:01.175) 0:03:36.580 ********** 2026-03-29 00:59:23.866925 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.866931 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.866938 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.866944 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.866950 | orchestrator | 2026-03-29 00:59:23.866957 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-29 00:59:23.866976 | orchestrator | Sunday 29 March 2026 00:51:50 +0000 (0:00:00.954) 0:03:37.535 ********** 2026-03-29 00:59:23.866983 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.866989 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.866996 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.867002 | orchestrator | 2026-03-29 00:59:23.867008 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-29 00:59:23.867015 | orchestrator | Sunday 29 March 2026 00:51:50 +0000 (0:00:00.279) 0:03:37.814 ********** 2026-03-29 00:59:23.867021 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:23.867027 | 
orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:23.867033 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:23.867040 | orchestrator | 2026-03-29 00:59:23.867050 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-29 00:59:23.867057 | orchestrator | Sunday 29 March 2026 00:51:51 +0000 (0:00:01.437) 0:03:39.252 ********** 2026-03-29 00:59:23.867063 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 00:59:23.867069 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 00:59:23.867075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 00:59:23.867082 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.867088 | orchestrator | 2026-03-29 00:59:23.867094 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-29 00:59:23.867100 | orchestrator | Sunday 29 March 2026 00:51:52 +0000 (0:00:00.539) 0:03:39.791 ********** 2026-03-29 00:59:23.867107 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.867113 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.867119 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.867126 | orchestrator | 2026-03-29 00:59:23.867132 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-29 00:59:23.867138 | orchestrator | Sunday 29 March 2026 00:51:52 +0000 (0:00:00.309) 0:03:40.100 ********** 2026-03-29 00:59:23.867144 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.867151 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.867157 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.867163 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.867170 | orchestrator | 2026-03-29 00:59:23.867176 | orchestrator | RUNNING HANDLER 
[ceph-handler : Set_fact trigger_restart] ********************** 2026-03-29 00:59:23.867187 | orchestrator | Sunday 29 March 2026 00:51:53 +0000 (0:00:00.862) 0:03:40.962 ********** 2026-03-29 00:59:23.867193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:59:23.867200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:59:23.867206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:59:23.867212 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867219 | orchestrator | 2026-03-29 00:59:23.867225 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-29 00:59:23.867232 | orchestrator | Sunday 29 March 2026 00:51:53 +0000 (0:00:00.367) 0:03:41.330 ********** 2026-03-29 00:59:23.867238 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867244 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.867250 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.867257 | orchestrator | 2026-03-29 00:59:23.867263 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-29 00:59:23.867269 | orchestrator | Sunday 29 March 2026 00:51:54 +0000 (0:00:00.277) 0:03:41.608 ********** 2026-03-29 00:59:23.867275 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867281 | orchestrator | 2026-03-29 00:59:23.867288 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-29 00:59:23.867294 | orchestrator | Sunday 29 March 2026 00:51:54 +0000 (0:00:00.239) 0:03:41.847 ********** 2026-03-29 00:59:23.867300 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867306 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.867313 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.867319 | orchestrator | 2026-03-29 00:59:23.867326 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-29 00:59:23.867332 | orchestrator | Sunday 29 March 2026 00:51:54 +0000 (0:00:00.316) 0:03:42.163 ********** 2026-03-29 00:59:23.867338 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867344 | orchestrator | 2026-03-29 00:59:23.867350 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-29 00:59:23.867357 | orchestrator | Sunday 29 March 2026 00:51:55 +0000 (0:00:00.237) 0:03:42.400 ********** 2026-03-29 00:59:23.867363 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867369 | orchestrator | 2026-03-29 00:59:23.867375 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-29 00:59:23.867381 | orchestrator | Sunday 29 March 2026 00:51:55 +0000 (0:00:00.210) 0:03:42.610 ********** 2026-03-29 00:59:23.867388 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867394 | orchestrator | 2026-03-29 00:59:23.867400 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-29 00:59:23.867406 | orchestrator | Sunday 29 March 2026 00:51:55 +0000 (0:00:00.131) 0:03:42.742 ********** 2026-03-29 00:59:23.867413 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867419 | orchestrator | 2026-03-29 00:59:23.867425 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-29 00:59:23.867431 | orchestrator | Sunday 29 March 2026 00:51:56 +0000 (0:00:00.789) 0:03:43.532 ********** 2026-03-29 00:59:23.867438 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867444 | orchestrator | 2026-03-29 00:59:23.867467 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-29 00:59:23.867477 | orchestrator | Sunday 29 March 2026 00:51:56 +0000 (0:00:00.242) 0:03:43.774 ********** 2026-03-29 
00:59:23.867487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:59:23.867498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:59:23.867509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:59:23.867519 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867528 | orchestrator | 2026-03-29 00:59:23.867534 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-29 00:59:23.867545 | orchestrator | Sunday 29 March 2026 00:51:56 +0000 (0:00:00.431) 0:03:44.205 ********** 2026-03-29 00:59:23.867556 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867563 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.867569 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.867576 | orchestrator | 2026-03-29 00:59:23.867582 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-29 00:59:23.867588 | orchestrator | Sunday 29 March 2026 00:51:57 +0000 (0:00:00.383) 0:03:44.589 ********** 2026-03-29 00:59:23.867595 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867601 | orchestrator | 2026-03-29 00:59:23.867610 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-29 00:59:23.867617 | orchestrator | Sunday 29 March 2026 00:51:57 +0000 (0:00:00.227) 0:03:44.817 ********** 2026-03-29 00:59:23.867623 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867630 | orchestrator | 2026-03-29 00:59:23.867636 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-29 00:59:23.867642 | orchestrator | Sunday 29 March 2026 00:51:57 +0000 (0:00:00.217) 0:03:45.034 ********** 2026-03-29 00:59:23.867649 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.867655 | orchestrator | skipping: [testbed-node-1] 
2026-03-29 00:59:23.867661 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.867667 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.867673 | orchestrator | 2026-03-29 00:59:23.867680 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-29 00:59:23.867686 | orchestrator | Sunday 29 March 2026 00:51:58 +0000 (0:00:01.080) 0:03:46.115 ********** 2026-03-29 00:59:23.867692 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.867698 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.867705 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.867711 | orchestrator | 2026-03-29 00:59:23.867717 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-29 00:59:23.867723 | orchestrator | Sunday 29 March 2026 00:51:59 +0000 (0:00:00.345) 0:03:46.460 ********** 2026-03-29 00:59:23.867730 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.867736 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.867742 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.867748 | orchestrator | 2026-03-29 00:59:23.867755 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-29 00:59:23.867761 | orchestrator | Sunday 29 March 2026 00:52:00 +0000 (0:00:01.428) 0:03:47.889 ********** 2026-03-29 00:59:23.867767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:59:23.867773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:59:23.867780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:59:23.867786 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867792 | orchestrator | 2026-03-29 00:59:23.867799 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called 
after restart] ********* 2026-03-29 00:59:23.867805 | orchestrator | Sunday 29 March 2026 00:52:01 +0000 (0:00:00.832) 0:03:48.721 ********** 2026-03-29 00:59:23.867811 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.867818 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.867824 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.867830 | orchestrator | 2026-03-29 00:59:23.867837 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-29 00:59:23.867843 | orchestrator | Sunday 29 March 2026 00:52:01 +0000 (0:00:00.445) 0:03:49.166 ********** 2026-03-29 00:59:23.867849 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.867855 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.867862 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.867868 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.867874 | orchestrator | 2026-03-29 00:59:23.867880 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-29 00:59:23.867891 | orchestrator | Sunday 29 March 2026 00:52:02 +0000 (0:00:00.808) 0:03:49.975 ********** 2026-03-29 00:59:23.867897 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.867903 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.867910 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.867916 | orchestrator | 2026-03-29 00:59:23.867922 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-29 00:59:23.867928 | orchestrator | Sunday 29 March 2026 00:52:03 +0000 (0:00:00.479) 0:03:50.455 ********** 2026-03-29 00:59:23.867934 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.867941 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.867947 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.867953 | 
orchestrator | 2026-03-29 00:59:23.867959 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-29 00:59:23.867966 | orchestrator | Sunday 29 March 2026 00:52:04 +0000 (0:00:01.362) 0:03:51.817 ********** 2026-03-29 00:59:23.867972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:59:23.867978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:59:23.867985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:59:23.867991 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.867997 | orchestrator | 2026-03-29 00:59:23.868004 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-29 00:59:23.868010 | orchestrator | Sunday 29 March 2026 00:52:05 +0000 (0:00:00.601) 0:03:52.419 ********** 2026-03-29 00:59:23.868016 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.868022 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.868029 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.868035 | orchestrator | 2026-03-29 00:59:23.868041 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-29 00:59:23.868047 | orchestrator | Sunday 29 March 2026 00:52:05 +0000 (0:00:00.473) 0:03:52.892 ********** 2026-03-29 00:59:23.868054 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.868060 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.868066 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.868072 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.868078 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.868089 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.868095 | orchestrator | 2026-03-29 00:59:23.868103 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-29 
00:59:23.868116 | orchestrator | Sunday 29 March 2026 00:52:06 +0000 (0:00:00.816) 0:03:53.709 ********** 2026-03-29 00:59:23.868131 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.868141 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.868151 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.868166 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.868177 | orchestrator | 2026-03-29 00:59:23.868188 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-29 00:59:23.868199 | orchestrator | Sunday 29 March 2026 00:52:07 +0000 (0:00:00.866) 0:03:54.575 ********** 2026-03-29 00:59:23.868211 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.868219 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.868226 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.868232 | orchestrator | 2026-03-29 00:59:23.868238 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-29 00:59:23.868245 | orchestrator | Sunday 29 March 2026 00:52:07 +0000 (0:00:00.710) 0:03:55.286 ********** 2026-03-29 00:59:23.868251 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:23.868257 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:23.868263 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:23.868269 | orchestrator | 2026-03-29 00:59:23.868276 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-29 00:59:23.868287 | orchestrator | Sunday 29 March 2026 00:52:09 +0000 (0:00:01.464) 0:03:56.750 ********** 2026-03-29 00:59:23.868294 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 00:59:23.868300 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 00:59:23.868306 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-2)  2026-03-29 00:59:23.868313 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.868319 | orchestrator | 2026-03-29 00:59:23.868325 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-29 00:59:23.868331 | orchestrator | Sunday 29 March 2026 00:52:10 +0000 (0:00:00.642) 0:03:57.393 ********** 2026-03-29 00:59:23.868338 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.868344 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.868350 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.868357 | orchestrator | 2026-03-29 00:59:23.868363 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-29 00:59:23.868369 | orchestrator | 2026-03-29 00:59:23.868375 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 00:59:23.868382 | orchestrator | Sunday 29 March 2026 00:52:10 +0000 (0:00:00.594) 0:03:57.987 ********** 2026-03-29 00:59:23.868388 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.868394 | orchestrator | 2026-03-29 00:59:23.868400 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 00:59:23.868406 | orchestrator | Sunday 29 March 2026 00:52:11 +0000 (0:00:00.887) 0:03:58.875 ********** 2026-03-29 00:59:23.868413 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.868419 | orchestrator | 2026-03-29 00:59:23.868425 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 00:59:23.868432 | orchestrator | Sunday 29 March 2026 00:52:12 +0000 (0:00:00.567) 0:03:59.443 ********** 2026-03-29 00:59:23.868438 | orchestrator | 
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Sunday 29 March 2026 00:52:13 +0000 (0:00:00.992)       0:04:00.435 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Sunday 29 March 2026 00:52:13 +0000 (0:00:00.390)       0:04:00.826 **********
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-0]

TASK [ceph-handler : Check for a rgw container] ********************************
Sunday 29 March 2026 00:52:13 +0000 (0:00:00.415)       0:04:01.241 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Sunday 29 March 2026 00:52:14 +0000 (0:00:00.413)       0:04:01.655 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Sunday 29 March 2026 00:52:15 +0000 (0:00:01.003)       0:04:02.658 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Sunday 29 March 2026 00:52:15 +0000 (0:00:00.337)       0:04:02.996 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Sunday 29 March 2026 00:52:16 +0000 (0:00:00.444)       0:04:03.441 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Sunday 29 March 2026 00:52:16 +0000 (0:00:00.741)       0:04:04.182 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Sunday 29 March 2026 00:52:17 +0000 (0:00:00.926)       0:04:05.109 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Sunday 29 March 2026 00:52:18 +0000 (0:00:00.311)       0:04:05.421 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Sunday 29 March 2026 00:52:18 +0000 (0:00:00.417)       0:04:05.839 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Sunday 29 March 2026 00:52:19 +0000 (0:00:00.972)       0:04:06.811 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Sunday 29 March 2026 00:52:19 +0000 (0:00:00.326)       0:04:07.137 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Sunday 29 March 2026 00:52:20 +0000 (0:00:01.049)       0:04:08.187 **********
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Sunday 29 March 2026 00:52:21 +0000 (0:00:00.559)       0:04:08.746 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Sunday 29 March 2026 00:52:21 +0000 (0:00:00.433)       0:04:09.180 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Sunday 29 March 2026 00:52:22 +0000 (0:00:00.302)       0:04:09.482 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Sunday 29 March 2026 00:52:22 +0000 (0:00:00.627)       0:04:10.109 **********
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Sunday 29 March 2026 00:52:23 +0000 (0:00:00.618)       0:04:10.728 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Sunday 29 March 2026 00:52:23 +0000 (0:00:00.320)       0:04:11.048 **********
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Sunday 29 March 2026 00:52:24 +0000 (0:00:00.871)       0:04:11.920 **********
skipping: [testbed-node-0]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Sunday 29 March 2026 00:52:24 +0000 (0:00:00.156)       0:04:12.076 **********
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Sunday 29 March 2026 00:52:25 +0000 (0:00:00.792)       0:04:12.869 **********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Sunday 29 March 2026 00:52:25 +0000 (0:00:00.340)       0:04:13.209 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Sunday 29 March 2026 00:52:26 +0000 (0:00:00.575)       0:04:13.785 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Sunday 29 March 2026 00:52:27 +0000 (0:00:01.258)       0:04:15.043 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Create monitor directory] *************************************
Sunday 29 March 2026 00:52:28 +0000 (0:00:00.813)       0:04:15.857 **********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Sunday 29 March 2026 00:52:29 +0000 (0:00:00.780)       0:04:16.637 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Sunday 29 March 2026 00:52:29 +0000 (0:00:00.672)       0:04:17.310 **********
changed: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Sunday 29 March 2026 00:52:31 +0000 (0:00:01.953)       0:04:19.264 **********
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Sunday 29 March 2026 00:52:32 +0000 (0:00:00.794)       0:04:20.058 **********
changed: [testbed-node-0] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-1] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-0 -> {{ item }}]
ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-1 -> {{ item }}]
ok: [testbed-node-2] => (item=None)
ok: [testbed-node-2 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Sunday 29 March 2026 00:52:36 +0000 (0:00:03.926)       0:04:23.984 **********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Sunday 29 March 2026 00:52:38 +0000 (0:00:01.599)       0:04:25.584 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Sunday 29 March 2026 00:52:38 +0000 (0:00:00.380)       0:04:25.964 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Generate initial monmap] **************************************
Sunday 29 March 2026 00:52:39 +0000 (0:00:00.454)       0:04:26.419 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Sunday 29 March 2026 00:52:40 +0000 (0:00:01.850)       0:04:28.269 **********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Sunday 29 March 2026 00:52:42 +0000 (0:00:01.317)       0:04:29.587 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include start_monitor.yml] ************************************
Sunday 29 March 2026 00:52:42 +0000 (0:00:00.277)       0:04:29.865 **********
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Sunday 29 March 2026 00:52:43 +0000 (0:00:00.645)       0:04:30.510 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Sunday 29 March 2026 00:52:43 +0000 (0:00:00.267)       0:04:30.778 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Sunday 29 March 2026 00:52:43 +0000 (0:00:00.256)       0:04:31.035 **********
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Sunday 29 March 2026 00:52:44 +0000 (0:00:00.612)       0:04:31.647 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Sunday 29 March 2026 00:52:45 +0000 (0:00:01.558)       0:04:33.206 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Sunday 29 March 2026 00:52:47 +0000 (0:00:01.176)       0:04:34.382 **********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Start the monitor service] ************************************
Sunday 29 March 2026 00:52:48 +0000 (0:00:01.573)       0:04:35.956 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Sunday 29 March 2026 00:52:51 +0000 (0:00:03.119)       0:04:39.075 **********
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Sunday 29 March 2026 00:52:52 +0000 (0:00:00.534)       0:04:39.609 **********
FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Sunday 29 March 2026 00:53:14 +0000 (0:00:22.058)       0:05:01.668 **********
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Sunday 29 March 2026 00:53:23 +0000 (0:00:09.111)       0:05:10.779 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Sunday 29 March 2026 00:53:24 +0000 (0:00:00.586)       0:05:11.366 **********
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__154da65fbad02d7988c2554a0915462e96612488'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__154da65fbad02d7988c2554a0915462e96612488'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__154da65fbad02d7988c2554a0915462e96612488'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__154da65fbad02d7988c2554a0915462e96612488'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__154da65fbad02d7988c2554a0915462e96612488'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__154da65fbad02d7988c2554a0915462e96612488'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__154da65fbad02d7988c2554a0915462e96612488'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Sunday 29 March 2026 00:53:39 +0000 (0:00:15.430)       0:05:26.796 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Sunday 29 March 2026 00:53:39 +0000 (0:00:00.328)       0:05:27.124 **********
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Sunday 29 March 2026 00:53:40 +0000 (0:00:00.812)       0:05:27.937 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Sunday 29 March 2026 00:53:40 +0000 (0:00:00.307)       0:05:28.244 **********
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Sunday 29 March 2026 00:53:41 +0000 (0:00:00.377)       0:05:28.622 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Sunday 29 March 2026 00:53:42 +0000 (0:00:01.127)       0:05:29.749 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Sunday 29 March 2026 00:53:42 +0000 (0:00:00.544)       0:05:30.293 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Sunday 29 March 2026 00:53:43 +0000 (0:00:00.566)       0:05:30.859 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Sunday 29 March 2026 00:53:44 +0000 (0:00:00.748)       0:05:31.608 **********
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [ceph-handler : Check for an osd container] *******************************
Sunday 29 March 2026 00:53:45 +0000 (0:00:00.771)       0:05:32.379 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Sunday 29 March 2026 00:53:45 +0000 (0:00:00.296)       0:05:32.676 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Sunday 29 March 2026 00:53:45 +0000 (0:00:00.573)       0:05:33.250 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Sunday 29 March 2026 00:53:46 +0000 (0:00:00.327)       0:05:33.577 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Sunday 29 March 2026 00:53:46 +0000 (0:00:00.690)       0:05:34.268 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Sunday 29 March 2026 00:53:47 +0000 (0:00:00.312)       0:05:34.580 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Sunday 29 March 2026 00:53:47 +0000 (0:00:00.543)       0:05:35.124 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Sunday 29 March 2026 00:53:48 +0000 (0:00:00.698)       0:05:35.823 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Sunday 29 March 2026 00:53:49 +0000 (0:00:00.671)       0:05:36.495 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Sunday 29 March 2026 00:53:49 +0000 (0:00:00.313)       0:05:36.808 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Sunday 29 March 2026 00:53:50 +0000 (0:00:00.561)       0:05:37.370 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Sunday 29 March 2026 00:53:50 +0000 (0:00:00.321)       0:05:37.691 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Sunday 29 March 2026 00:53:50 +0000 (0:00:00.422)       0:05:38.114 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Sunday 29 March 2026 00:53:51 +0000 (0:00:00.328)       0:05:38.442 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Sunday 29 March 2026 00:53:51 +0000 (0:00:00.326)       0:05:38.769 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
2026-03-29 00:59:23.871299 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 00:59:23.871304 | orchestrator | Sunday 29 March 2026 00:53:51 +0000 (0:00:00.524) 0:05:39.294 ********** 2026-03-29 00:59:23.871309 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.871314 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.871318 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.871323 | orchestrator | 2026-03-29 00:59:23.871328 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 00:59:23.871333 | orchestrator | Sunday 29 March 2026 00:53:52 +0000 (0:00:00.322) 0:05:39.617 ********** 2026-03-29 00:59:23.871338 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.871343 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.871348 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.871352 | orchestrator | 2026-03-29 00:59:23.871357 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 00:59:23.871362 | orchestrator | Sunday 29 March 2026 00:53:52 +0000 (0:00:00.353) 0:05:39.970 ********** 2026-03-29 00:59:23.871367 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.871371 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.871376 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.871381 | orchestrator | 2026-03-29 00:59:23.871385 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-29 00:59:23.871390 | orchestrator | Sunday 29 March 2026 00:53:53 +0000 (0:00:00.774) 0:05:40.745 ********** 2026-03-29 00:59:23.871397 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 00:59:23.871405 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 00:59:23.871417 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-29 00:59:23.871427 | orchestrator | 2026-03-29 00:59:23.871434 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-29 00:59:23.871442 | orchestrator | Sunday 29 March 2026 00:53:54 +0000 (0:00:00.637) 0:05:41.383 ********** 2026-03-29 00:59:23.871463 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.871470 | orchestrator | 2026-03-29 00:59:23.871478 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-29 00:59:23.871487 | orchestrator | Sunday 29 March 2026 00:53:54 +0000 (0:00:00.546) 0:05:41.929 ********** 2026-03-29 00:59:23.871495 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:23.871503 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:23.871511 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:23.871521 | orchestrator | 2026-03-29 00:59:23.871526 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-29 00:59:23.871530 | orchestrator | Sunday 29 March 2026 00:53:55 +0000 (0:00:00.766) 0:05:42.695 ********** 2026-03-29 00:59:23.871535 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.871545 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.871549 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.871554 | orchestrator | 2026-03-29 00:59:23.871559 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-29 00:59:23.871564 | orchestrator | Sunday 29 March 2026 00:53:55 +0000 (0:00:00.586) 0:05:43.282 ********** 2026-03-29 00:59:23.871569 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 00:59:23.871574 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 00:59:23.871578 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-29 00:59:23.871583 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-29 00:59:23.871588 | orchestrator | 2026-03-29 00:59:23.871600 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-29 00:59:23.871605 | orchestrator | Sunday 29 March 2026 00:54:06 +0000 (0:00:10.460) 0:05:53.743 ********** 2026-03-29 00:59:23.871610 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.871615 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.871625 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.871630 | orchestrator | 2026-03-29 00:59:23.871635 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-29 00:59:23.871639 | orchestrator | Sunday 29 March 2026 00:54:06 +0000 (0:00:00.394) 0:05:54.138 ********** 2026-03-29 00:59:23.871644 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-29 00:59:23.871649 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-29 00:59:23.871654 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-29 00:59:23.871659 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-29 00:59:23.871664 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:59:23.871690 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:59:23.871696 | orchestrator | 2026-03-29 00:59:23.871701 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-29 00:59:23.871706 | orchestrator | Sunday 29 March 2026 00:54:09 +0000 (0:00:02.211) 0:05:56.349 ********** 2026-03-29 00:59:23.871710 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-29 00:59:23.871715 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-29 00:59:23.871720 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-29 
00:59:23.871731 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-29 00:59:23.871736 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-29 00:59:23.871741 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-29 00:59:23.871745 | orchestrator | 2026-03-29 00:59:23.871750 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-29 00:59:23.871755 | orchestrator | Sunday 29 March 2026 00:54:10 +0000 (0:00:01.351) 0:05:57.701 ********** 2026-03-29 00:59:23.871760 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.871765 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.871770 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.871775 | orchestrator | 2026-03-29 00:59:23.871780 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-29 00:59:23.871784 | orchestrator | Sunday 29 March 2026 00:54:11 +0000 (0:00:00.983) 0:05:58.685 ********** 2026-03-29 00:59:23.871789 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.871794 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.871799 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.871804 | orchestrator | 2026-03-29 00:59:23.871809 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-29 00:59:23.871814 | orchestrator | Sunday 29 March 2026 00:54:11 +0000 (0:00:00.294) 0:05:58.979 ********** 2026-03-29 00:59:23.871818 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.871823 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.871828 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.871833 | orchestrator | 2026-03-29 00:59:23.871838 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-29 00:59:23.871847 | orchestrator | Sunday 29 March 2026 00:54:11 +0000 (0:00:00.308) 0:05:59.288 
********** 2026-03-29 00:59:23.871852 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.871857 | orchestrator | 2026-03-29 00:59:23.871862 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-29 00:59:23.871867 | orchestrator | Sunday 29 March 2026 00:54:12 +0000 (0:00:00.727) 0:06:00.016 ********** 2026-03-29 00:59:23.871872 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.871877 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.871882 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.871887 | orchestrator | 2026-03-29 00:59:23.871892 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-29 00:59:23.871896 | orchestrator | Sunday 29 March 2026 00:54:13 +0000 (0:00:00.379) 0:06:00.395 ********** 2026-03-29 00:59:23.871901 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.871906 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.871911 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.871916 | orchestrator | 2026-03-29 00:59:23.871921 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-29 00:59:23.871926 | orchestrator | Sunday 29 March 2026 00:54:13 +0000 (0:00:00.393) 0:06:00.789 ********** 2026-03-29 00:59:23.871930 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.871935 | orchestrator | 2026-03-29 00:59:23.871940 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-29 00:59:23.871945 | orchestrator | Sunday 29 March 2026 00:54:14 +0000 (0:00:00.776) 0:06:01.565 ********** 2026-03-29 00:59:23.871950 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:23.871954 | orchestrator | changed: 
[testbed-node-1] 2026-03-29 00:59:23.871959 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:23.871964 | orchestrator | 2026-03-29 00:59:23.871969 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-29 00:59:23.871974 | orchestrator | Sunday 29 March 2026 00:54:15 +0000 (0:00:01.254) 0:06:02.820 ********** 2026-03-29 00:59:23.871978 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:23.871983 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:23.871988 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:23.871993 | orchestrator | 2026-03-29 00:59:23.871998 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-29 00:59:23.872002 | orchestrator | Sunday 29 March 2026 00:54:16 +0000 (0:00:01.132) 0:06:03.952 ********** 2026-03-29 00:59:23.872007 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:23.872012 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:23.872017 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:23.872022 | orchestrator | 2026-03-29 00:59:23.872026 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-29 00:59:23.872031 | orchestrator | Sunday 29 March 2026 00:54:18 +0000 (0:00:01.720) 0:06:05.673 ********** 2026-03-29 00:59:23.872036 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:23.872041 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:23.872046 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:23.872050 | orchestrator | 2026-03-29 00:59:23.872055 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-29 00:59:23.872060 | orchestrator | Sunday 29 March 2026 00:54:20 +0000 (0:00:02.242) 0:06:07.916 ********** 2026-03-29 00:59:23.872065 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.872069 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 00:59:23.872074 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-29 00:59:23.872079 | orchestrator | 2026-03-29 00:59:23.872084 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-29 00:59:23.872089 | orchestrator | Sunday 29 March 2026 00:54:20 +0000 (0:00:00.406) 0:06:08.323 ********** 2026-03-29 00:59:23.872110 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-29 00:59:23.872116 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-29 00:59:23.872121 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-29 00:59:23.872126 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-29 00:59:23.872133 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-29 00:59:23.872138 | orchestrator | 2026-03-29 00:59:23.872143 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-29 00:59:23.872148 | orchestrator | Sunday 29 March 2026 00:54:45 +0000 (0:00:24.563) 0:06:32.886 ********** 2026-03-29 00:59:23.872152 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-29 00:59:23.872157 | orchestrator | 2026-03-29 00:59:23.872162 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-29 00:59:23.872167 | orchestrator | Sunday 29 March 2026 00:54:46 +0000 (0:00:01.443) 0:06:34.330 ********** 2026-03-29 00:59:23.872172 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.872177 | orchestrator | 2026-03-29 00:59:23.872181 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] 
************************** 2026-03-29 00:59:23.872186 | orchestrator | Sunday 29 March 2026 00:54:47 +0000 (0:00:00.288) 0:06:34.619 ********** 2026-03-29 00:59:23.872191 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.872196 | orchestrator | 2026-03-29 00:59:23.872200 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-29 00:59:23.872205 | orchestrator | Sunday 29 March 2026 00:54:47 +0000 (0:00:00.149) 0:06:34.769 ********** 2026-03-29 00:59:23.872210 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-29 00:59:23.872215 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-29 00:59:23.872219 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-29 00:59:23.872224 | orchestrator | 2026-03-29 00:59:23.872229 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-03-29 00:59:23.872234 | orchestrator | Sunday 29 March 2026 00:54:54 +0000 (0:00:06.826) 0:06:41.595 ********** 2026-03-29 00:59:23.872238 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-29 00:59:23.872243 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-29 00:59:23.872248 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-29 00:59:23.872253 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-29 00:59:23.872257 | orchestrator | 2026-03-29 00:59:23.872262 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-29 00:59:23.872267 | orchestrator | Sunday 29 March 2026 00:55:00 +0000 (0:00:05.943) 0:06:47.538 ********** 2026-03-29 00:59:23.872272 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:23.872277 | orchestrator | changed: [testbed-node-0] 
2026-03-29 00:59:23.872281 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:23.872286 | orchestrator | 2026-03-29 00:59:23.872291 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-29 00:59:23.872296 | orchestrator | Sunday 29 March 2026 00:55:00 +0000 (0:00:00.645) 0:06:48.183 ********** 2026-03-29 00:59:23.872300 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.872305 | orchestrator | 2026-03-29 00:59:23.872310 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-29 00:59:23.872315 | orchestrator | Sunday 29 March 2026 00:55:01 +0000 (0:00:00.646) 0:06:48.830 ********** 2026-03-29 00:59:23.872319 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.872327 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.872332 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.872337 | orchestrator | 2026-03-29 00:59:23.872342 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-29 00:59:23.872346 | orchestrator | Sunday 29 March 2026 00:55:01 +0000 (0:00:00.308) 0:06:49.138 ********** 2026-03-29 00:59:23.872351 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:23.872356 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:23.872361 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:23.872366 | orchestrator | 2026-03-29 00:59:23.872370 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-29 00:59:23.872375 | orchestrator | Sunday 29 March 2026 00:55:02 +0000 (0:00:01.095) 0:06:50.234 ********** 2026-03-29 00:59:23.872380 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-29 00:59:23.872385 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-29 00:59:23.872390 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-29 00:59:23.872394 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.872399 | orchestrator | 2026-03-29 00:59:23.872404 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-29 00:59:23.872409 | orchestrator | Sunday 29 March 2026 00:55:03 +0000 (0:00:00.559) 0:06:50.793 ********** 2026-03-29 00:59:23.872414 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.872419 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.872423 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.872428 | orchestrator | 2026-03-29 00:59:23.872433 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-29 00:59:23.872438 | orchestrator | 2026-03-29 00:59:23.872443 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 00:59:23.872457 | orchestrator | Sunday 29 March 2026 00:55:04 +0000 (0:00:00.664) 0:06:51.458 ********** 2026-03-29 00:59:23.872462 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.872467 | orchestrator | 2026-03-29 00:59:23.872487 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 00:59:23.872492 | orchestrator | Sunday 29 March 2026 00:55:04 +0000 (0:00:00.444) 0:06:51.902 ********** 2026-03-29 00:59:23.872497 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.872502 | orchestrator | 2026-03-29 00:59:23.872507 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 00:59:23.872514 | orchestrator | Sunday 29 March 2026 00:55:05 +0000 (0:00:00.625) 0:06:52.527 ********** 2026-03-29 
00:59:23.872519 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.872524 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.872529 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.872534 | orchestrator | 2026-03-29 00:59:23.872539 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 00:59:23.872544 | orchestrator | Sunday 29 March 2026 00:55:05 +0000 (0:00:00.309) 0:06:52.837 ********** 2026-03-29 00:59:23.872548 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.872553 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.872558 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.872563 | orchestrator | 2026-03-29 00:59:23.872568 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 00:59:23.872572 | orchestrator | Sunday 29 March 2026 00:55:06 +0000 (0:00:00.621) 0:06:53.458 ********** 2026-03-29 00:59:23.872577 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.872582 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.872587 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.872592 | orchestrator | 2026-03-29 00:59:23.872596 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 00:59:23.872601 | orchestrator | Sunday 29 March 2026 00:55:06 +0000 (0:00:00.635) 0:06:54.094 ********** 2026-03-29 00:59:23.872610 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.872614 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.872619 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.872624 | orchestrator | 2026-03-29 00:59:23.872629 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 00:59:23.872634 | orchestrator | Sunday 29 March 2026 00:55:07 +0000 (0:00:00.865) 0:06:54.959 ********** 2026-03-29 00:59:23.872639 | orchestrator | skipping: 
[testbed-node-3] 2026-03-29 00:59:23.872644 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.872648 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.872653 | orchestrator | 2026-03-29 00:59:23.872658 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 00:59:23.872663 | orchestrator | Sunday 29 March 2026 00:55:07 +0000 (0:00:00.270) 0:06:55.229 ********** 2026-03-29 00:59:23.872668 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.872673 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.872678 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.872683 | orchestrator | 2026-03-29 00:59:23.872688 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 00:59:23.872692 | orchestrator | Sunday 29 March 2026 00:55:08 +0000 (0:00:00.273) 0:06:55.502 ********** 2026-03-29 00:59:23.872697 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.872702 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.872707 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.872712 | orchestrator | 2026-03-29 00:59:23.872716 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 00:59:23.872721 | orchestrator | Sunday 29 March 2026 00:55:08 +0000 (0:00:00.274) 0:06:55.777 ********** 2026-03-29 00:59:23.872726 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.872731 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.872736 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.872740 | orchestrator | 2026-03-29 00:59:23.872745 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 00:59:23.872750 | orchestrator | Sunday 29 March 2026 00:55:09 +0000 (0:00:00.870) 0:06:56.647 ********** 2026-03-29 00:59:23.872755 | orchestrator | ok: [testbed-node-4] 2026-03-29 
00:59:23.872760 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.872764 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.872769 | orchestrator | 2026-03-29 00:59:23.872774 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 00:59:23.872779 | orchestrator | Sunday 29 March 2026 00:55:09 +0000 (0:00:00.625) 0:06:57.273 ********** 2026-03-29 00:59:23.872783 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.872788 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.872793 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.872798 | orchestrator | 2026-03-29 00:59:23.872803 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 00:59:23.872807 | orchestrator | Sunday 29 March 2026 00:55:10 +0000 (0:00:00.256) 0:06:57.530 ********** 2026-03-29 00:59:23.872812 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.872817 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.872822 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.872826 | orchestrator | 2026-03-29 00:59:23.872831 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 00:59:23.872836 | orchestrator | Sunday 29 March 2026 00:55:10 +0000 (0:00:00.290) 0:06:57.820 ********** 2026-03-29 00:59:23.872841 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.872846 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.872851 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.872855 | orchestrator | 2026-03-29 00:59:23.872860 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 00:59:23.872865 | orchestrator | Sunday 29 March 2026 00:55:10 +0000 (0:00:00.435) 0:06:58.256 ********** 2026-03-29 00:59:23.872870 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.872885 | orchestrator | ok: 
[testbed-node-4] 2026-03-29 00:59:23.872890 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.872895 | orchestrator | 2026-03-29 00:59:23.872900 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 00:59:23.872905 | orchestrator | Sunday 29 March 2026 00:55:11 +0000 (0:00:00.291) 0:06:58.547 ********** 2026-03-29 00:59:23.872910 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.872914 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.872919 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.872924 | orchestrator | 2026-03-29 00:59:23.872929 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 00:59:23.872936 | orchestrator | Sunday 29 March 2026 00:55:11 +0000 (0:00:00.280) 0:06:58.828 ********** 2026-03-29 00:59:23.872941 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.872946 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.872951 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.872955 | orchestrator | 2026-03-29 00:59:23.872960 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 00:59:23.872965 | orchestrator | Sunday 29 March 2026 00:55:11 +0000 (0:00:00.264) 0:06:59.092 ********** 2026-03-29 00:59:23.872970 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.872977 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.872982 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.872987 | orchestrator | 2026-03-29 00:59:23.872992 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 00:59:23.872997 | orchestrator | Sunday 29 March 2026 00:55:12 +0000 (0:00:00.448) 0:06:59.541 ********** 2026-03-29 00:59:23.873002 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.873006 | orchestrator | skipping: [testbed-node-4] 2026-03-29 
00:59:23.873011 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.873016 | orchestrator | 2026-03-29 00:59:23.873021 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 00:59:23.873026 | orchestrator | Sunday 29 March 2026 00:55:12 +0000 (0:00:00.257) 0:06:59.799 ********** 2026-03-29 00:59:23.873030 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.873035 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.873040 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.873045 | orchestrator | 2026-03-29 00:59:23.873050 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 00:59:23.873055 | orchestrator | Sunday 29 March 2026 00:55:12 +0000 (0:00:00.292) 0:07:00.091 ********** 2026-03-29 00:59:23.873059 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.873064 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.873069 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.873074 | orchestrator | 2026-03-29 00:59:23.873079 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-29 00:59:23.873084 | orchestrator | Sunday 29 March 2026 00:55:13 +0000 (0:00:00.640) 0:07:00.732 ********** 2026-03-29 00:59:23.873088 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.873093 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.873098 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.873102 | orchestrator | 2026-03-29 00:59:23.873107 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-29 00:59:23.873112 | orchestrator | Sunday 29 March 2026 00:55:13 +0000 (0:00:00.297) 0:07:01.029 ********** 2026-03-29 00:59:23.873117 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 00:59:23.873122 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 00:59:23.873127 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 00:59:23.873131 | orchestrator | 2026-03-29 00:59:23.873136 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-29 00:59:23.873141 | orchestrator | Sunday 29 March 2026 00:55:14 +0000 (0:00:00.586) 0:07:01.617 ********** 2026-03-29 00:59:23.873146 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.873154 | orchestrator | 2026-03-29 00:59:23.873159 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-29 00:59:23.873163 | orchestrator | Sunday 29 March 2026 00:55:14 +0000 (0:00:00.432) 0:07:02.049 ********** 2026-03-29 00:59:23.873168 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.873173 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.873178 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.873183 | orchestrator | 2026-03-29 00:59:23.873187 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-29 00:59:23.873192 | orchestrator | Sunday 29 March 2026 00:55:15 +0000 (0:00:00.449) 0:07:02.499 ********** 2026-03-29 00:59:23.873197 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.873202 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.873207 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.873212 | orchestrator | 2026-03-29 00:59:23.873216 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-29 00:59:23.873221 | orchestrator | Sunday 29 March 2026 00:55:15 +0000 (0:00:00.280) 0:07:02.779 ********** 2026-03-29 00:59:23.873226 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.873231 | 
orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.873235 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.873240 | orchestrator | 2026-03-29 00:59:23.873245 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-29 00:59:23.873250 | orchestrator | Sunday 29 March 2026 00:55:16 +0000 (0:00:00.648) 0:07:03.428 ********** 2026-03-29 00:59:23.873255 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.873259 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.873264 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.873269 | orchestrator | 2026-03-29 00:59:23.873274 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-29 00:59:23.873278 | orchestrator | Sunday 29 March 2026 00:55:16 +0000 (0:00:00.292) 0:07:03.721 ********** 2026-03-29 00:59:23.873283 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-29 00:59:23.873288 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-29 00:59:23.873293 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-29 00:59:23.873298 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-29 00:59:23.873302 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-29 00:59:23.873307 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-29 00:59:23.873315 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-29 00:59:23.873320 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-29 00:59:23.873325 | orchestrator | changed: [testbed-node-5] => 
(item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-29 00:59:23.873330 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-29 00:59:23.873337 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-29 00:59:23.873342 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-29 00:59:23.873347 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-29 00:59:23.873351 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-29 00:59:23.873356 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-29 00:59:23.873361 | orchestrator | 2026-03-29 00:59:23.873365 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-29 00:59:23.873373 | orchestrator | Sunday 29 March 2026 00:55:18 +0000 (0:00:02.394) 0:07:06.115 ********** 2026-03-29 00:59:23.873378 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.873382 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.873387 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.873392 | orchestrator | 2026-03-29 00:59:23.873397 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-29 00:59:23.873402 | orchestrator | Sunday 29 March 2026 00:55:19 +0000 (0:00:00.306) 0:07:06.422 ********** 2026-03-29 00:59:23.873407 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.873412 | orchestrator | 2026-03-29 00:59:23.873417 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-29 00:59:23.873421 | orchestrator | Sunday 29 March 2026 00:55:19 +0000 (0:00:00.535) 
0:07:06.957 ********** 2026-03-29 00:59:23.873426 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-29 00:59:23.873431 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-29 00:59:23.873436 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-29 00:59:23.873441 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-29 00:59:23.873446 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-29 00:59:23.873477 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-29 00:59:23.873485 | orchestrator | 2026-03-29 00:59:23.873493 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-29 00:59:23.873501 | orchestrator | Sunday 29 March 2026 00:55:20 +0000 (0:00:01.322) 0:07:08.280 ********** 2026-03-29 00:59:23.873509 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:59:23.873516 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-29 00:59:23.873524 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 00:59:23.873531 | orchestrator | 2026-03-29 00:59:23.873538 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-29 00:59:23.873545 | orchestrator | Sunday 29 March 2026 00:55:23 +0000 (0:00:02.229) 0:07:10.510 ********** 2026-03-29 00:59:23.873552 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 00:59:23.873560 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-29 00:59:23.873566 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.873573 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 00:59:23.873580 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-29 00:59:23.873587 | orchestrator | changed: [testbed-node-4] 2026-03-29 
00:59:23.873595 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 00:59:23.873603 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-29 00:59:23.873611 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.873619 | orchestrator | 2026-03-29 00:59:23.873627 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-29 00:59:23.873635 | orchestrator | Sunday 29 March 2026 00:55:24 +0000 (0:00:01.285) 0:07:11.795 ********** 2026-03-29 00:59:23.873643 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 00:59:23.873651 | orchestrator | 2026-03-29 00:59:23.873658 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-29 00:59:23.873665 | orchestrator | Sunday 29 March 2026 00:55:27 +0000 (0:00:02.691) 0:07:14.487 ********** 2026-03-29 00:59:23.873672 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.873680 | orchestrator | 2026-03-29 00:59:23.873688 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-29 00:59:23.873696 | orchestrator | Sunday 29 March 2026 00:55:27 +0000 (0:00:00.620) 0:07:15.107 ********** 2026-03-29 00:59:23.873704 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-00df2b4e-a360-5652-a277-e346f3e9f535', 'data_vg': 'ceph-00df2b4e-a360-5652-a277-e346f3e9f535'}) 2026-03-29 00:59:23.873715 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ec951f8f-e82d-5973-b083-619786b6a4a7', 'data_vg': 'ceph-ec951f8f-e82d-5973-b083-619786b6a4a7'}) 2026-03-29 00:59:23.873720 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-687a2d88-e62e-55f7-9995-e7b8ae522292', 'data_vg': 'ceph-687a2d88-e62e-55f7-9995-e7b8ae522292'}) 2026-03-29 00:59:23.873729 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-35a0cf9a-662c-5baf-94a5-8e3a66aae069', 'data_vg': 'ceph-35a0cf9a-662c-5baf-94a5-8e3a66aae069'}) 2026-03-29 00:59:23.873734 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-fb9b884b-e3c0-524d-8e95-f889faf8bdb8', 'data_vg': 'ceph-fb9b884b-e3c0-524d-8e95-f889faf8bdb8'}) 2026-03-29 00:59:23.873742 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b95a2846-f14f-5a7d-ae9e-15318cf5fdef', 'data_vg': 'ceph-b95a2846-f14f-5a7d-ae9e-15318cf5fdef'}) 2026-03-29 00:59:23.873747 | orchestrator | 2026-03-29 00:59:23.873752 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-29 00:59:23.873757 | orchestrator | Sunday 29 March 2026 00:56:05 +0000 (0:00:38.116) 0:07:53.224 ********** 2026-03-29 00:59:23.873762 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.873767 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.873772 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.873776 | orchestrator | 2026-03-29 00:59:23.873781 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-29 00:59:23.873786 | orchestrator | Sunday 29 March 2026 00:56:06 +0000 (0:00:00.312) 0:07:53.536 ********** 2026-03-29 00:59:23.873791 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.873796 | orchestrator | 2026-03-29 00:59:23.873801 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-29 00:59:23.873805 | orchestrator | Sunday 29 March 2026 00:56:06 +0000 (0:00:00.742) 0:07:54.279 ********** 2026-03-29 00:59:23.873810 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.873815 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.873820 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.873825 | orchestrator | 2026-03-29 
00:59:23.873829 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-29 00:59:23.873834 | orchestrator | Sunday 29 March 2026 00:56:07 +0000 (0:00:00.677) 0:07:54.956 ********** 2026-03-29 00:59:23.873839 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.873844 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.873849 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.873853 | orchestrator | 2026-03-29 00:59:23.873858 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-29 00:59:23.873863 | orchestrator | Sunday 29 March 2026 00:56:10 +0000 (0:00:03.130) 0:07:58.087 ********** 2026-03-29 00:59:23.873868 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.873873 | orchestrator | 2026-03-29 00:59:23.873878 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-29 00:59:23.873883 | orchestrator | Sunday 29 March 2026 00:56:11 +0000 (0:00:00.633) 0:07:58.720 ********** 2026-03-29 00:59:23.873887 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.873892 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.873897 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.873902 | orchestrator | 2026-03-29 00:59:23.873907 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-29 00:59:23.873911 | orchestrator | Sunday 29 March 2026 00:56:12 +0000 (0:00:01.073) 0:07:59.794 ********** 2026-03-29 00:59:23.873916 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.873921 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.873926 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.873934 | orchestrator | 2026-03-29 00:59:23.873938 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] 
*************************************** 2026-03-29 00:59:23.873943 | orchestrator | Sunday 29 March 2026 00:56:13 +0000 (0:00:01.063) 0:08:00.857 ********** 2026-03-29 00:59:23.873948 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.873953 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.873958 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.873962 | orchestrator | 2026-03-29 00:59:23.873967 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-29 00:59:23.873972 | orchestrator | Sunday 29 March 2026 00:56:15 +0000 (0:00:01.852) 0:08:02.710 ********** 2026-03-29 00:59:23.873977 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.873982 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.873986 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.873991 | orchestrator | 2026-03-29 00:59:23.873996 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-29 00:59:23.874001 | orchestrator | Sunday 29 March 2026 00:56:15 +0000 (0:00:00.346) 0:08:03.056 ********** 2026-03-29 00:59:23.874006 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874010 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.874045 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.874050 | orchestrator | 2026-03-29 00:59:23.874054 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-29 00:59:23.874059 | orchestrator | Sunday 29 March 2026 00:56:16 +0000 (0:00:00.621) 0:08:03.677 ********** 2026-03-29 00:59:23.874063 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-03-29 00:59:23.874068 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-29 00:59:23.874073 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-03-29 00:59:23.874077 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-29 00:59:23.874082 | orchestrator | ok: 
[testbed-node-4] => (item=3) 2026-03-29 00:59:23.874086 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-29 00:59:23.874091 | orchestrator | 2026-03-29 00:59:23.874095 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-29 00:59:23.874100 | orchestrator | Sunday 29 March 2026 00:56:17 +0000 (0:00:01.131) 0:08:04.809 ********** 2026-03-29 00:59:23.874104 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-29 00:59:23.874109 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-29 00:59:23.874114 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-29 00:59:23.874118 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-29 00:59:23.874123 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-29 00:59:23.874127 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-29 00:59:23.874132 | orchestrator | 2026-03-29 00:59:23.874139 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-29 00:59:23.874144 | orchestrator | Sunday 29 March 2026 00:56:19 +0000 (0:00:02.464) 0:08:07.273 ********** 2026-03-29 00:59:23.874149 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-29 00:59:23.874153 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-03-29 00:59:23.874158 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-29 00:59:23.874162 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-29 00:59:23.874167 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-29 00:59:23.874171 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-29 00:59:23.874176 | orchestrator | 2026-03-29 00:59:23.874185 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-29 00:59:23.874190 | orchestrator | Sunday 29 March 2026 00:56:23 +0000 (0:00:03.967) 0:08:11.240 ********** 2026-03-29 00:59:23.874195 | orchestrator | 
skipping: [testbed-node-3] 2026-03-29 00:59:23.874199 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.874204 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-29 00:59:23.874208 | orchestrator | 2026-03-29 00:59:23.874213 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-29 00:59:23.874221 | orchestrator | Sunday 29 March 2026 00:56:26 +0000 (0:00:02.755) 0:08:13.996 ********** 2026-03-29 00:59:23.874225 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874230 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.874234 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-29 00:59:23.874239 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-29 00:59:23.874243 | orchestrator | 2026-03-29 00:59:23.874248 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-29 00:59:23.874252 | orchestrator | Sunday 29 March 2026 00:56:38 +0000 (0:00:12.119) 0:08:26.115 ********** 2026-03-29 00:59:23.874257 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.874261 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874266 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.874270 | orchestrator | 2026-03-29 00:59:23.874275 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-29 00:59:23.874279 | orchestrator | Sunday 29 March 2026 00:56:39 +0000 (0:00:01.166) 0:08:27.281 ********** 2026-03-29 00:59:23.874284 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.874288 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874293 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.874297 | orchestrator | 2026-03-29 00:59:23.874302 | orchestrator | RUNNING HANDLER [ceph-handler : Osds 
handler] ********************************** 2026-03-29 00:59:23.874306 | orchestrator | Sunday 29 March 2026 00:56:40 +0000 (0:00:00.382) 0:08:27.664 ********** 2026-03-29 00:59:23.874311 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.874315 | orchestrator | 2026-03-29 00:59:23.874320 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-29 00:59:23.874324 | orchestrator | Sunday 29 March 2026 00:56:40 +0000 (0:00:00.467) 0:08:28.131 ********** 2026-03-29 00:59:23.874329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:59:23.874333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:59:23.874338 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:59:23.874343 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874347 | orchestrator | 2026-03-29 00:59:23.874352 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-29 00:59:23.874356 | orchestrator | Sunday 29 March 2026 00:56:41 +0000 (0:00:00.839) 0:08:28.971 ********** 2026-03-29 00:59:23.874361 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874365 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.874370 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.874374 | orchestrator | 2026-03-29 00:59:23.874379 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-29 00:59:23.874383 | orchestrator | Sunday 29 March 2026 00:56:42 +0000 (0:00:00.382) 0:08:29.354 ********** 2026-03-29 00:59:23.874388 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874393 | orchestrator | 2026-03-29 00:59:23.874397 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 
2026-03-29 00:59:23.874402 | orchestrator | Sunday 29 March 2026 00:56:42 +0000 (0:00:00.272) 0:08:29.627 ********** 2026-03-29 00:59:23.874406 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874411 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.874415 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.874420 | orchestrator | 2026-03-29 00:59:23.874424 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-29 00:59:23.874429 | orchestrator | Sunday 29 March 2026 00:56:42 +0000 (0:00:00.364) 0:08:29.991 ********** 2026-03-29 00:59:23.874433 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874438 | orchestrator | 2026-03-29 00:59:23.874443 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-29 00:59:23.874461 | orchestrator | Sunday 29 March 2026 00:56:42 +0000 (0:00:00.250) 0:08:30.242 ********** 2026-03-29 00:59:23.874470 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874474 | orchestrator | 2026-03-29 00:59:23.874479 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-29 00:59:23.874484 | orchestrator | Sunday 29 March 2026 00:56:43 +0000 (0:00:00.224) 0:08:30.467 ********** 2026-03-29 00:59:23.874488 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874493 | orchestrator | 2026-03-29 00:59:23.874497 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-29 00:59:23.874502 | orchestrator | Sunday 29 March 2026 00:56:43 +0000 (0:00:00.135) 0:08:30.603 ********** 2026-03-29 00:59:23.874507 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874511 | orchestrator | 2026-03-29 00:59:23.874516 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-29 00:59:23.874520 | orchestrator | Sunday 29 March 2026 
00:56:43 +0000 (0:00:00.237) 0:08:30.840 ********** 2026-03-29 00:59:23.874528 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874532 | orchestrator | 2026-03-29 00:59:23.874537 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-29 00:59:23.874541 | orchestrator | Sunday 29 March 2026 00:56:44 +0000 (0:00:00.819) 0:08:31.660 ********** 2026-03-29 00:59:23.874546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:59:23.874550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:59:23.874555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:59:23.874562 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874567 | orchestrator | 2026-03-29 00:59:23.874571 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-29 00:59:23.874576 | orchestrator | Sunday 29 March 2026 00:56:44 +0000 (0:00:00.400) 0:08:32.060 ********** 2026-03-29 00:59:23.874581 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874585 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.874590 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.874594 | orchestrator | 2026-03-29 00:59:23.874599 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-29 00:59:23.874603 | orchestrator | Sunday 29 March 2026 00:56:45 +0000 (0:00:00.313) 0:08:32.374 ********** 2026-03-29 00:59:23.874608 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874612 | orchestrator | 2026-03-29 00:59:23.874617 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-29 00:59:23.874621 | orchestrator | Sunday 29 March 2026 00:56:45 +0000 (0:00:00.228) 0:08:32.602 ********** 2026-03-29 00:59:23.874626 | orchestrator | skipping: [testbed-node-3] 2026-03-29 
00:59:23.874630 | orchestrator | 2026-03-29 00:59:23.874635 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-29 00:59:23.874639 | orchestrator | 2026-03-29 00:59:23.874644 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 00:59:23.874649 | orchestrator | Sunday 29 March 2026 00:56:46 +0000 (0:00:00.910) 0:08:33.513 ********** 2026-03-29 00:59:23.874653 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.874659 | orchestrator | 2026-03-29 00:59:23.874663 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 00:59:23.874668 | orchestrator | Sunday 29 March 2026 00:56:47 +0000 (0:00:01.229) 0:08:34.742 ********** 2026-03-29 00:59:23.874672 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:23.874677 | orchestrator | 2026-03-29 00:59:23.874681 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-29 00:59:23.874686 | orchestrator | Sunday 29 March 2026 00:56:48 +0000 (0:00:01.004) 0:08:35.747 ********** 2026-03-29 00:59:23.874691 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874698 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.874703 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.874707 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.874712 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.874716 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.874721 | orchestrator | 2026-03-29 00:59:23.874725 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-03-29 00:59:23.874730 | orchestrator | Sunday 29 March 2026 00:56:49 +0000 (0:00:01.216) 0:08:36.963 ********** 2026-03-29 00:59:23.874734 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.874739 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.874743 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.874748 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.874752 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.874757 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.874762 | orchestrator | 2026-03-29 00:59:23.874766 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 00:59:23.874771 | orchestrator | Sunday 29 March 2026 00:56:50 +0000 (0:00:00.694) 0:08:37.658 ********** 2026-03-29 00:59:23.874775 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.874780 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.874784 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.874789 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.874793 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.874798 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.874802 | orchestrator | 2026-03-29 00:59:23.874807 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 00:59:23.874811 | orchestrator | Sunday 29 March 2026 00:56:51 +0000 (0:00:01.047) 0:08:38.705 ********** 2026-03-29 00:59:23.874816 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.874821 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.874825 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.874830 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.874834 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.874839 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.874843 | orchestrator | 2026-03-29 
00:59:23.874848 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 00:59:23.874854 | orchestrator | Sunday 29 March 2026 00:56:52 +0000 (0:00:00.777) 0:08:39.482 ********** 2026-03-29 00:59:23.874861 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874871 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.874882 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.874889 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.874896 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.874903 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.874909 | orchestrator | 2026-03-29 00:59:23.874916 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 00:59:23.874923 | orchestrator | Sunday 29 March 2026 00:56:53 +0000 (0:00:01.037) 0:08:40.520 ********** 2026-03-29 00:59:23.874929 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.874936 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.874943 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.874949 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.874955 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:23.874966 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.874973 | orchestrator | 2026-03-29 00:59:23.874980 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 00:59:23.874987 | orchestrator | Sunday 29 March 2026 00:56:53 +0000 (0:00:00.586) 0:08:41.106 ********** 2026-03-29 00:59:23.874994 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.875001 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.875008 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.875015 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:23.875022 | orchestrator | skipping: [testbed-node-1] 
2026-03-29 00:59:23.875038 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:23.875046 | orchestrator | 2026-03-29 00:59:23.875054 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 00:59:23.875061 | orchestrator | Sunday 29 March 2026 00:56:54 +0000 (0:00:00.674) 0:08:41.781 ********** 2026-03-29 00:59:23.875068 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.875076 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.875084 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.875090 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.875095 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.875099 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.875104 | orchestrator | 2026-03-29 00:59:23.875109 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 00:59:23.875113 | orchestrator | Sunday 29 March 2026 00:56:55 +0000 (0:00:00.878) 0:08:42.659 ********** 2026-03-29 00:59:23.875118 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.875123 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.875127 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.875131 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:23.875136 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:23.875140 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:23.875145 | orchestrator | 2026-03-29 00:59:23.875149 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 00:59:23.875154 | orchestrator | Sunday 29 March 2026 00:56:56 +0000 (0:00:01.123) 0:08:43.783 ********** 2026-03-29 00:59:23.875159 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.875163 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.875168 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.875172 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 00:59:23.875177 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.875181 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.875186 | orchestrator |
2026-03-29 00:59:23.875190 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-29 00:59:23.875195 | orchestrator | Sunday 29 March 2026 00:56:56 +0000 (0:00:00.507) 0:08:44.290 **********
2026-03-29 00:59:23.875199 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.875204 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.875208 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.875213 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.875218 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.875222 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.875227 | orchestrator |
2026-03-29 00:59:23.875231 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-29 00:59:23.875236 | orchestrator | Sunday 29 March 2026 00:56:57 +0000 (0:00:00.697) 0:08:44.987 **********
2026-03-29 00:59:23.875241 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.875245 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.875250 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.875254 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.875259 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.875263 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.875268 | orchestrator |
2026-03-29 00:59:23.875272 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-29 00:59:23.875277 | orchestrator | Sunday 29 March 2026 00:56:58 +0000 (0:00:00.517) 0:08:45.505 **********
2026-03-29 00:59:23.875281 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.875286 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.875290 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.875295 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.875299 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.875304 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.875308 | orchestrator |
2026-03-29 00:59:23.875313 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-29 00:59:23.875317 | orchestrator | Sunday 29 March 2026 00:56:58 +0000 (0:00:00.704) 0:08:46.209 **********
2026-03-29 00:59:23.875325 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.875330 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.875335 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.875339 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.875344 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.875348 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.875353 | orchestrator |
2026-03-29 00:59:23.875357 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-29 00:59:23.875362 | orchestrator | Sunday 29 March 2026 00:56:59 +0000 (0:00:00.533) 0:08:46.743 **********
2026-03-29 00:59:23.875366 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.875371 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.875375 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.875380 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.875384 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.875389 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.875393 | orchestrator |
2026-03-29 00:59:23.875398 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-29 00:59:23.875402 | orchestrator | Sunday 29 March 2026 00:57:00 +0000 (0:00:00.858) 0:08:47.602 **********
2026-03-29 00:59:23.875407 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.875412 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.875416 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.875421 | orchestrator | skipping: [testbed-node-0]
2026-03-29 00:59:23.875425 | orchestrator | skipping: [testbed-node-1]
2026-03-29 00:59:23.875430 | orchestrator | skipping: [testbed-node-2]
2026-03-29 00:59:23.875434 | orchestrator |
2026-03-29 00:59:23.875439 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-29 00:59:23.875443 | orchestrator | Sunday 29 March 2026 00:57:00 +0000 (0:00:00.666) 0:08:48.269 **********
2026-03-29 00:59:23.875478 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.875483 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.875488 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.875492 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.875501 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.875505 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.875510 | orchestrator |
2026-03-29 00:59:23.875515 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-29 00:59:23.875519 | orchestrator | Sunday 29 March 2026 00:57:01 +0000 (0:00:00.770) 0:08:49.039 **********
2026-03-29 00:59:23.875524 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.875529 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.875533 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.875538 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.875542 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.875550 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.875555 | orchestrator |
2026-03-29 00:59:23.875560 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-29 00:59:23.875564 | orchestrator | Sunday 29 March 2026 00:57:02 +0000 (0:00:01.278) 0:08:49.676 **********
2026-03-29 00:59:23.875569 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.875573 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.875578 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.875583 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.875587 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.875592 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.875596 | orchestrator |
2026-03-29 00:59:23.875601 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-29 00:59:23.875605 | orchestrator | Sunday 29 March 2026 00:57:03 +0000 (0:00:01.278) 0:08:50.954 **********
2026-03-29 00:59:23.875609 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-29 00:59:23.875613 | orchestrator |
2026-03-29 00:59:23.875618 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-29 00:59:23.875625 | orchestrator | Sunday 29 March 2026 00:57:07 +0000 (0:00:03.661) 0:08:54.616 **********
2026-03-29 00:59:23.875629 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-29 00:59:23.875633 | orchestrator |
2026-03-29 00:59:23.875637 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-29 00:59:23.875642 | orchestrator | Sunday 29 March 2026 00:57:09 +0000 (0:00:01.888) 0:08:56.505 **********
2026-03-29 00:59:23.875646 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.875650 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.875654 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.875658 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.875663 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:59:23.875667 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:59:23.875671 | orchestrator |
2026-03-29 00:59:23.875675 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-29 00:59:23.875680 | orchestrator | Sunday 29 March 2026 00:57:11 +0000 (0:00:01.930) 0:08:58.435 **********
2026-03-29 00:59:23.875684 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.875688 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.875692 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.875696 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:59:23.875701 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:59:23.875705 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:59:23.875709 | orchestrator |
2026-03-29 00:59:23.875713 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-29 00:59:23.875717 | orchestrator | Sunday 29 March 2026 00:57:12 +0000 (0:00:01.045) 0:08:59.480 **********
2026-03-29 00:59:23.875722 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:59:23.875727 | orchestrator |
2026-03-29 00:59:23.875731 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-29 00:59:23.875735 | orchestrator | Sunday 29 March 2026 00:57:13 +0000 (0:00:01.326) 0:09:00.807 **********
2026-03-29 00:59:23.875739 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.875743 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.875747 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.875752 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:59:23.875756 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:59:23.875760 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:59:23.875764 | orchestrator |
2026-03-29 00:59:23.875768 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-29 00:59:23.875773 | orchestrator | Sunday 29 March 2026 00:57:15 +0000 (0:00:03.639) 0:09:02.603 **********
2026-03-29 00:59:23.875777 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.875781 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.875785 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.875789 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:59:23.875794 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:59:23.875798 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:59:23.875802 | orchestrator |
2026-03-29 00:59:23.875806 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-29 00:59:23.875810 | orchestrator | Sunday 29 March 2026 00:57:18 +0000 (0:00:03.639) 0:09:06.242 **********
2026-03-29 00:59:23.875815 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 00:59:23.875819 | orchestrator |
2026-03-29 00:59:23.875823 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-29 00:59:23.875827 | orchestrator | Sunday 29 March 2026 00:57:19 +0000 (0:00:01.068) 0:09:07.311 **********
2026-03-29 00:59:23.875832 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.875836 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.875840 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.875848 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.875853 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.875857 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.875861 | orchestrator |
2026-03-29 00:59:23.875865 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-29 00:59:23.875869 | orchestrator | Sunday 29 March 2026 00:57:20 +0000 (0:00:00.663) 0:09:07.974 **********
2026-03-29 00:59:23.875874 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.875878 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.875882 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.875886 | orchestrator | changed: [testbed-node-2]
2026-03-29 00:59:23.875893 | orchestrator | changed: [testbed-node-0]
2026-03-29 00:59:23.875897 | orchestrator | changed: [testbed-node-1]
2026-03-29 00:59:23.875901 | orchestrator |
2026-03-29 00:59:23.875905 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-29 00:59:23.875910 | orchestrator | Sunday 29 March 2026 00:57:23 +0000 (0:00:02.528) 0:09:10.503 **********
2026-03-29 00:59:23.875914 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.875918 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.875922 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.875926 | orchestrator | ok: [testbed-node-0]
2026-03-29 00:59:23.875930 | orchestrator | ok: [testbed-node-1]
2026-03-29 00:59:23.875937 | orchestrator | ok: [testbed-node-2]
2026-03-29 00:59:23.875941 | orchestrator |
2026-03-29 00:59:23.875945 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-03-29 00:59:23.875949 | orchestrator |
2026-03-29 00:59:23.875953 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-29 00:59:23.875957 | orchestrator | Sunday 29 March 2026 00:57:24 +0000 (0:00:01.028) 0:09:11.531 **********
2026-03-29 00:59:23.875962 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:59:23.875966 | orchestrator |
2026-03-29 00:59:23.875970 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-29 00:59:23.875974 | orchestrator | Sunday 29 March 2026 00:57:24 +0000 (0:00:00.555) 0:09:12.086 **********
2026-03-29 00:59:23.875978 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:59:23.875983 | orchestrator |
2026-03-29 00:59:23.875987 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-29 00:59:23.875991 | orchestrator | Sunday 29 March 2026 00:57:25 +0000 (0:00:00.867) 0:09:12.953 **********
2026-03-29 00:59:23.875995 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.875999 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.876003 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.876008 | orchestrator |
2026-03-29 00:59:23.876016 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-29 00:59:23.876023 | orchestrator | Sunday 29 March 2026 00:57:25 +0000 (0:00:00.294) 0:09:13.248 **********
2026-03-29 00:59:23.876029 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.876035 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.876042 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.876048 | orchestrator |
2026-03-29 00:59:23.876055 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-29 00:59:23.876061 | orchestrator | Sunday 29 March 2026 00:57:26 +0000 (0:00:00.712) 0:09:13.961 **********
2026-03-29 00:59:23.876066 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.876072 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.876078 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.876084 | orchestrator |
2026-03-29 00:59:23.876091 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-29 00:59:23.876097 | orchestrator | Sunday 29 March 2026 00:57:27 +0000 (0:00:00.787) 0:09:14.748 **********
2026-03-29 00:59:23.876103 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.876109 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.876120 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.876126 | orchestrator |
2026-03-29 00:59:23.876133 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-29 00:59:23.876141 | orchestrator | Sunday 29 March 2026 00:57:28 +0000 (0:00:00.676) 0:09:15.425 **********
2026-03-29 00:59:23.876147 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.876154 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.876161 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.876168 | orchestrator |
2026-03-29 00:59:23.876174 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-29 00:59:23.876178 | orchestrator | Sunday 29 March 2026 00:57:28 +0000 (0:00:00.263) 0:09:15.689 **********
2026-03-29 00:59:23.876182 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.876186 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.876190 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.876194 | orchestrator |
2026-03-29 00:59:23.876198 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-29 00:59:23.876203 | orchestrator | Sunday 29 March 2026 00:57:28 +0000 (0:00:00.303) 0:09:15.992 **********
2026-03-29 00:59:23.876207 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.876211 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.876215 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.876219 | orchestrator |
2026-03-29 00:59:23.876223 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-29 00:59:23.876227 | orchestrator | Sunday 29 March 2026 00:57:29 +0000 (0:00:00.561) 0:09:16.554 **********
2026-03-29 00:59:23.876232 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.876236 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.876240 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.876244 | orchestrator |
2026-03-29 00:59:23.876248 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-29 00:59:23.876252 | orchestrator | Sunday 29 March 2026 00:57:29 +0000 (0:00:00.702) 0:09:17.256 **********
2026-03-29 00:59:23.876256 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.876260 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.876264 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.876268 | orchestrator |
2026-03-29 00:59:23.876272 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-29 00:59:23.876277 | orchestrator | Sunday 29 March 2026 00:57:30 +0000 (0:00:00.709) 0:09:17.966 **********
2026-03-29 00:59:23.876281 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.876285 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.876289 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.876293 | orchestrator |
2026-03-29 00:59:23.876297 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-29 00:59:23.876301 | orchestrator | Sunday 29 March 2026 00:57:30 +0000 (0:00:00.365) 0:09:18.332 **********
2026-03-29 00:59:23.876305 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.876309 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.876313 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.876318 | orchestrator |
2026-03-29 00:59:23.876325 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-29 00:59:23.876329 | orchestrator | Sunday 29 March 2026 00:57:31 +0000 (0:00:00.620) 0:09:18.952 **********
2026-03-29 00:59:23.876333 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.876338 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.876342 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.876346 | orchestrator |
2026-03-29 00:59:23.876350 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-29 00:59:23.876354 | orchestrator | Sunday 29 March 2026 00:57:31 +0000 (0:00:00.342) 0:09:19.294 **********
2026-03-29 00:59:23.876361 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.876365 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.876369 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.876373 | orchestrator |
2026-03-29 00:59:23.876381 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-29 00:59:23.876385 | orchestrator | Sunday 29 March 2026 00:57:32 +0000 (0:00:00.378) 0:09:19.673 **********
2026-03-29 00:59:23.876389 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.876393 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.876397 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.876401 | orchestrator |
2026-03-29 00:59:23.876405 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-29 00:59:23.876409 | orchestrator | Sunday 29 March 2026 00:57:32 +0000 (0:00:00.347) 0:09:20.021 **********
2026-03-29 00:59:23.876414 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.876418 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.876422 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.876426 | orchestrator |
2026-03-29 00:59:23.876430 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-29 00:59:23.876434 | orchestrator | Sunday 29 March 2026 00:57:33 +0000 (0:00:00.584) 0:09:20.606 **********
2026-03-29 00:59:23.876438 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.876443 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.876457 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.876465 | orchestrator |
2026-03-29 00:59:23.876469 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-29 00:59:23.876473 | orchestrator | Sunday 29 March 2026 00:57:33 +0000 (0:00:00.308) 0:09:20.914 **********
2026-03-29 00:59:23.876477 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.876482 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.876486 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.876490 | orchestrator |
2026-03-29 00:59:23.876494 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-29 00:59:23.876498 | orchestrator | Sunday 29 March 2026 00:57:33 +0000 (0:00:00.315) 0:09:21.230 **********
2026-03-29 00:59:23.876502 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.876506 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.876510 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.876514 | orchestrator |
2026-03-29 00:59:23.876518 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-29 00:59:23.876523 | orchestrator | Sunday 29 March 2026 00:57:34 +0000 (0:00:00.325) 0:09:21.555 **********
2026-03-29 00:59:23.876527 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.876531 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.876535 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.876539 | orchestrator |
2026-03-29 00:59:23.876543 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-29 00:59:23.876547 | orchestrator | Sunday 29 March 2026 00:57:35 +0000 (0:00:00.917) 0:09:22.473 **********
2026-03-29 00:59:23.876551 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.876555 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.876560 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-29 00:59:23.876564 | orchestrator |
2026-03-29 00:59:23.876568 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-29 00:59:23.876572 | orchestrator | Sunday 29 March 2026 00:57:35 +0000 (0:00:00.410) 0:09:22.883 **********
2026-03-29 00:59:23.876576 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-29 00:59:23.876580 | orchestrator |
2026-03-29 00:59:23.876585 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-29 00:59:23.876589 | orchestrator | Sunday 29 March 2026 00:57:37 +0000 (0:00:02.043) 0:09:24.927 **********
2026-03-29 00:59:23.876594 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-29 00:59:23.876600 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.876604 | orchestrator |
2026-03-29 00:59:23.876608 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-29 00:59:23.876617 | orchestrator | Sunday 29 March 2026 00:57:38 +0000 (0:00:00.553) 0:09:25.480 **********
2026-03-29 00:59:23.876622 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-29 00:59:23.876629 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-29 00:59:23.876634 | orchestrator |
2026-03-29 00:59:23.876638 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-29 00:59:23.876642 | orchestrator | Sunday 29 March 2026 00:57:46 +0000 (0:00:08.686) 0:09:34.167 **********
2026-03-29 00:59:23.876646 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-29 00:59:23.876650 | orchestrator |
2026-03-29 00:59:23.876657 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-29 00:59:23.876661 | orchestrator | Sunday 29 March 2026 00:57:50 +0000 (0:00:03.797) 0:09:37.964 **********
2026-03-29 00:59:23.876666 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:59:23.876670 | orchestrator |
2026-03-29 00:59:23.876674 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-29 00:59:23.876678 | orchestrator | Sunday 29 March 2026 00:57:51 +0000 (0:00:00.560) 0:09:38.525 **********
2026-03-29 00:59:23.876685 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-29 00:59:23.876689 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-29 00:59:23.876693 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-29 00:59:23.876697 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-29 00:59:23.876701 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-29 00:59:23.876706 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-29 00:59:23.876710 | orchestrator |
2026-03-29 00:59:23.876714 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-29 00:59:23.876718 | orchestrator | Sunday 29 March 2026 00:57:52 +0000 (0:00:01.068) 0:09:39.593 **********
2026-03-29 00:59:23.876722 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-29 00:59:23.876726 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-29 00:59:23.876730 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-29 00:59:23.876734 | orchestrator |
2026-03-29 00:59:23.876739 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-29 00:59:23.876743 | orchestrator | Sunday 29 March 2026 00:57:54 +0000 (0:00:02.434) 0:09:42.028 **********
2026-03-29 00:59:23.876747 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-29 00:59:23.876751 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-29 00:59:23.876755 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.876759 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-29 00:59:23.876764 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-29 00:59:23.876768 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.876772 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-29 00:59:23.876776 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-29 00:59:23.876780 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.876784 | orchestrator |
2026-03-29 00:59:23.876788 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-29 00:59:23.876793 | orchestrator | Sunday 29 March 2026 00:57:56 +0000 (0:00:01.737) 0:09:43.766 **********
2026-03-29 00:59:23.876800 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.876804 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.876808 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.876812 | orchestrator |
2026-03-29 00:59:23.876816 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-29 00:59:23.876820 | orchestrator | Sunday 29 March 2026 00:57:59 +0000 (0:00:02.596) 0:09:46.362 **********
2026-03-29 00:59:23.876825 | orchestrator | skipping: [testbed-node-3]
2026-03-29 00:59:23.876829 | orchestrator | skipping: [testbed-node-4]
2026-03-29 00:59:23.876833 | orchestrator | skipping: [testbed-node-5]
2026-03-29 00:59:23.876837 | orchestrator |
2026-03-29 00:59:23.876841 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-29 00:59:23.876845 | orchestrator | Sunday 29 March 2026 00:57:59 +0000 (0:00:00.295) 0:09:46.658 **********
2026-03-29 00:59:23.876849 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:59:23.876854 | orchestrator |
2026-03-29 00:59:23.876858 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-29 00:59:23.876862 | orchestrator | Sunday 29 March 2026 00:58:00 +0000 (0:00:00.700) 0:09:47.358 **********
2026-03-29 00:59:23.876866 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:59:23.876870 | orchestrator |
2026-03-29 00:59:23.876874 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-29 00:59:23.876878 | orchestrator | Sunday 29 March 2026 00:58:00 +0000 (0:00:00.483) 0:09:47.841 **********
2026-03-29 00:59:23.876883 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.876887 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.876891 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.876895 | orchestrator |
2026-03-29 00:59:23.876899 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-29 00:59:23.876903 | orchestrator | Sunday 29 March 2026 00:58:01 +0000 (0:00:01.363) 0:09:49.205 **********
2026-03-29 00:59:23.876907 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.876911 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.876915 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.876920 | orchestrator |
2026-03-29 00:59:23.876924 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-29 00:59:23.876928 | orchestrator | Sunday 29 March 2026 00:58:03 +0000 (0:00:01.903) 0:09:50.607 **********
2026-03-29 00:59:23.876932 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.876936 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.876940 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.876944 | orchestrator |
2026-03-29 00:59:23.876948 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-29 00:59:23.876953 | orchestrator | Sunday 29 March 2026 00:58:05 +0000 (0:00:01.946) 0:09:52.510 **********
2026-03-29 00:59:23.876957 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.876961 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.876965 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.876969 | orchestrator |
2026-03-29 00:59:23.876975 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-29 00:59:23.876980 | orchestrator | Sunday 29 March 2026 00:58:07 +0000 (0:00:01.946) 0:09:54.456 **********
2026-03-29 00:59:23.876984 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.876988 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.876992 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.876996 | orchestrator |
2026-03-29 00:59:23.877000 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-29 00:59:23.877004 | orchestrator | Sunday 29 March 2026 00:58:08 +0000 (0:00:01.258) 0:09:55.715 **********
2026-03-29 00:59:23.877009 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.877015 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.877022 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.877026 | orchestrator |
2026-03-29 00:59:23.877030 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-29 00:59:23.877034 | orchestrator | Sunday 29 March 2026 00:58:09 +0000 (0:00:00.621) 0:09:56.336 **********
2026-03-29 00:59:23.877039 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 00:59:23.877043 | orchestrator |
2026-03-29 00:59:23.877047 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-29 00:59:23.877051 | orchestrator | Sunday 29 March 2026 00:58:09 +0000 (0:00:00.842) 0:09:57.179 **********
2026-03-29 00:59:23.877055 | orchestrator | ok: [testbed-node-3]
2026-03-29 00:59:23.877059 | orchestrator | ok: [testbed-node-4]
2026-03-29 00:59:23.877063 | orchestrator | ok: [testbed-node-5]
2026-03-29 00:59:23.877067 | orchestrator |
2026-03-29 00:59:23.877071 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-29 00:59:23.877076 | orchestrator | Sunday 29 March 2026 00:58:10 +0000 (0:00:00.322) 0:09:57.501 **********
2026-03-29 00:59:23.877080 | orchestrator | changed: [testbed-node-3]
2026-03-29 00:59:23.877084 | orchestrator | changed: [testbed-node-4]
2026-03-29 00:59:23.877088 | orchestrator | changed: [testbed-node-5]
2026-03-29 00:59:23.877092 | orchestrator |
2026-03-29 00:59:23.877096 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-29 00:59:23.877100 | orchestrator | Sunday 29 March 2026 00:58:11 +0000 (0:00:01.424) 0:09:58.926 **********
2026-03-29 00:59:23.877104 | orchestrator | skipping: [testbed-node-3] =>
(item=testbed-node-3)  2026-03-29 00:59:23.877108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:59:23.877113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:59:23.877117 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.877121 | orchestrator | 2026-03-29 00:59:23.877125 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-29 00:59:23.877129 | orchestrator | Sunday 29 March 2026 00:58:12 +0000 (0:00:00.943) 0:09:59.869 ********** 2026-03-29 00:59:23.877133 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.877137 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.877141 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.877146 | orchestrator | 2026-03-29 00:59:23.877150 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-29 00:59:23.877154 | orchestrator | 2026-03-29 00:59:23.877158 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-29 00:59:23.877162 | orchestrator | Sunday 29 March 2026 00:58:13 +0000 (0:00:00.822) 0:10:00.692 ********** 2026-03-29 00:59:23.877166 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.877170 | orchestrator | 2026-03-29 00:59:23.877174 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-29 00:59:23.877178 | orchestrator | Sunday 29 March 2026 00:58:13 +0000 (0:00:00.493) 0:10:01.186 ********** 2026-03-29 00:59:23.877183 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.877187 | orchestrator | 2026-03-29 00:59:23.877191 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2026-03-29 00:59:23.877195 | orchestrator | Sunday 29 March 2026 00:58:14 +0000 (0:00:00.716) 0:10:01.903 ********** 2026-03-29 00:59:23.877199 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.877203 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.877207 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.877211 | orchestrator | 2026-03-29 00:59:23.877215 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-29 00:59:23.877219 | orchestrator | Sunday 29 March 2026 00:58:14 +0000 (0:00:00.328) 0:10:02.231 ********** 2026-03-29 00:59:23.877226 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.877230 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.877234 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.877238 | orchestrator | 2026-03-29 00:59:23.877243 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-29 00:59:23.877247 | orchestrator | Sunday 29 March 2026 00:58:15 +0000 (0:00:00.838) 0:10:03.070 ********** 2026-03-29 00:59:23.877251 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.877255 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.877259 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.877263 | orchestrator | 2026-03-29 00:59:23.877267 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-29 00:59:23.877271 | orchestrator | Sunday 29 March 2026 00:58:16 +0000 (0:00:01.033) 0:10:04.104 ********** 2026-03-29 00:59:23.877275 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.877279 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.877284 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.877288 | orchestrator | 2026-03-29 00:59:23.877292 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-29 
00:59:23.877296 | orchestrator | Sunday 29 March 2026 00:58:17 +0000 (0:00:00.720) 0:10:04.824 ********** 2026-03-29 00:59:23.877300 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.877304 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.877308 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.877312 | orchestrator | 2026-03-29 00:59:23.877317 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-29 00:59:23.877323 | orchestrator | Sunday 29 March 2026 00:58:17 +0000 (0:00:00.305) 0:10:05.130 ********** 2026-03-29 00:59:23.877327 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.877332 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.877336 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.877340 | orchestrator | 2026-03-29 00:59:23.877344 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-29 00:59:23.877348 | orchestrator | Sunday 29 March 2026 00:58:18 +0000 (0:00:00.345) 0:10:05.475 ********** 2026-03-29 00:59:23.877352 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.877356 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.877360 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.877365 | orchestrator | 2026-03-29 00:59:23.877371 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-29 00:59:23.877375 | orchestrator | Sunday 29 March 2026 00:58:18 +0000 (0:00:00.619) 0:10:06.094 ********** 2026-03-29 00:59:23.877379 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.877383 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.877387 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.877391 | orchestrator | 2026-03-29 00:59:23.877396 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-29 00:59:23.877400 | 
orchestrator | Sunday 29 March 2026 00:58:19 +0000 (0:00:00.839) 0:10:06.934 ********** 2026-03-29 00:59:23.877404 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.877408 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.877412 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.877416 | orchestrator | 2026-03-29 00:59:23.877420 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-29 00:59:23.877424 | orchestrator | Sunday 29 March 2026 00:58:20 +0000 (0:00:00.826) 0:10:07.760 ********** 2026-03-29 00:59:23.877429 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.877433 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.877437 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.877441 | orchestrator | 2026-03-29 00:59:23.877445 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-29 00:59:23.877460 | orchestrator | Sunday 29 March 2026 00:58:20 +0000 (0:00:00.309) 0:10:08.070 ********** 2026-03-29 00:59:23.877465 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.877469 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.877476 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.877480 | orchestrator | 2026-03-29 00:59:23.877484 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-29 00:59:23.877488 | orchestrator | Sunday 29 March 2026 00:58:21 +0000 (0:00:00.572) 0:10:08.643 ********** 2026-03-29 00:59:23.877492 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.877496 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.877500 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.877504 | orchestrator | 2026-03-29 00:59:23.877508 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-29 00:59:23.877513 | orchestrator | Sunday 29 March 2026 
00:58:21 +0000 (0:00:00.331) 0:10:08.975 ********** 2026-03-29 00:59:23.877517 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.877521 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.877525 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.877529 | orchestrator | 2026-03-29 00:59:23.877533 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-29 00:59:23.877537 | orchestrator | Sunday 29 March 2026 00:58:21 +0000 (0:00:00.353) 0:10:09.328 ********** 2026-03-29 00:59:23.877541 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.877545 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.877549 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.877553 | orchestrator | 2026-03-29 00:59:23.877557 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-29 00:59:23.877561 | orchestrator | Sunday 29 March 2026 00:58:22 +0000 (0:00:00.330) 0:10:09.659 ********** 2026-03-29 00:59:23.877566 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.877570 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.877574 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.877578 | orchestrator | 2026-03-29 00:59:23.877582 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-29 00:59:23.877586 | orchestrator | Sunday 29 March 2026 00:58:22 +0000 (0:00:00.574) 0:10:10.233 ********** 2026-03-29 00:59:23.877590 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.877595 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.877599 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.877603 | orchestrator | 2026-03-29 00:59:23.877607 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-29 00:59:23.877611 | orchestrator | Sunday 29 March 2026 00:58:23 +0000 (0:00:00.329) 
0:10:10.563 ********** 2026-03-29 00:59:23.877615 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.877619 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.877623 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.877627 | orchestrator | 2026-03-29 00:59:23.877631 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-29 00:59:23.877636 | orchestrator | Sunday 29 March 2026 00:58:23 +0000 (0:00:00.344) 0:10:10.907 ********** 2026-03-29 00:59:23.877640 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.877644 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.877648 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.877652 | orchestrator | 2026-03-29 00:59:23.877656 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-29 00:59:23.877660 | orchestrator | Sunday 29 March 2026 00:58:23 +0000 (0:00:00.355) 0:10:11.263 ********** 2026-03-29 00:59:23.877664 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.877668 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.877672 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.877677 | orchestrator | 2026-03-29 00:59:23.877681 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-29 00:59:23.877685 | orchestrator | Sunday 29 March 2026 00:58:24 +0000 (0:00:00.842) 0:10:12.106 ********** 2026-03-29 00:59:23.877689 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.877693 | orchestrator | 2026-03-29 00:59:23.877697 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-29 00:59:23.877704 | orchestrator | Sunday 29 March 2026 00:58:25 +0000 (0:00:00.532) 0:10:12.638 ********** 2026-03-29 00:59:23.877710 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:59:23.877715 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-29 00:59:23.877719 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 00:59:23.877723 | orchestrator | 2026-03-29 00:59:23.877727 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-29 00:59:23.877732 | orchestrator | Sunday 29 March 2026 00:58:27 +0000 (0:00:02.137) 0:10:14.775 ********** 2026-03-29 00:59:23.877736 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 00:59:23.877742 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-29 00:59:23.877746 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.877750 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 00:59:23.877755 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-29 00:59:23.877759 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.877763 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 00:59:23.877767 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-29 00:59:23.877771 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.877775 | orchestrator | 2026-03-29 00:59:23.877779 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-29 00:59:23.877783 | orchestrator | Sunday 29 March 2026 00:58:28 +0000 (0:00:01.529) 0:10:16.305 ********** 2026-03-29 00:59:23.877788 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.877792 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.877796 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.877807 | orchestrator | 2026-03-29 00:59:23.877812 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-29 00:59:23.877824 | orchestrator | Sunday 29 March 2026 00:58:29 +0000 
(0:00:00.326) 0:10:16.631 ********** 2026-03-29 00:59:23.877830 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.877834 | orchestrator | 2026-03-29 00:59:23.877839 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-29 00:59:23.877843 | orchestrator | Sunday 29 March 2026 00:58:29 +0000 (0:00:00.525) 0:10:17.157 ********** 2026-03-29 00:59:23.877847 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.877851 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.877855 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.877860 | orchestrator | 2026-03-29 00:59:23.877864 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-29 00:59:23.877868 | orchestrator | Sunday 29 March 2026 00:58:31 +0000 (0:00:01.479) 0:10:18.637 ********** 2026-03-29 00:59:23.877872 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:59:23.877876 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-29 00:59:23.877880 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:59:23.877885 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-29 00:59:23.877889 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:59:23.877893 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-29 00:59:23.877900 | orchestrator | 2026-03-29 00:59:23.877904 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-29 00:59:23.877908 | orchestrator | Sunday 29 March 2026 00:58:36 +0000 (0:00:04.765) 0:10:23.402 ********** 2026-03-29 00:59:23.877912 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:59:23.877916 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 00:59:23.877920 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:59:23.877924 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 00:59:23.877928 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 00:59:23.877933 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 00:59:23.877937 | orchestrator | 2026-03-29 00:59:23.877942 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-29 00:59:23.877949 | orchestrator | Sunday 29 March 2026 00:58:38 +0000 (0:00:02.327) 0:10:25.730 ********** 2026-03-29 00:59:23.877955 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 00:59:23.877963 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.877974 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 00:59:23.877980 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.877987 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 00:59:23.877994 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.878000 | orchestrator | 2026-03-29 
00:59:23.878006 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-29 00:59:23.878037 | orchestrator | Sunday 29 March 2026 00:58:39 +0000 (0:00:01.214) 0:10:26.944 ********** 2026-03-29 00:59:23.878051 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-29 00:59:23.878058 | orchestrator | 2026-03-29 00:59:23.878065 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-29 00:59:23.878072 | orchestrator | Sunday 29 March 2026 00:58:39 +0000 (0:00:00.216) 0:10:27.161 ********** 2026-03-29 00:59:23.878079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:59:23.878090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:59:23.878098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:59:23.878104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:59:23.878112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:59:23.878116 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.878121 | orchestrator | 2026-03-29 00:59:23.878125 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-29 00:59:23.878129 | orchestrator | Sunday 29 March 2026 00:58:41 +0000 (0:00:01.249) 0:10:28.411 ********** 2026-03-29 00:59:23.878133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 
3, 'type': 'replicated'}})  2026-03-29 00:59:23.878137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:59:23.878141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:59:23.878145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:59:23.878149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-29 00:59:23.878157 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.878161 | orchestrator | 2026-03-29 00:59:23.878166 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-29 00:59:23.878170 | orchestrator | Sunday 29 March 2026 00:58:41 +0000 (0:00:00.618) 0:10:29.029 ********** 2026-03-29 00:59:23.878174 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 00:59:23.878178 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 00:59:23.878182 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 00:59:23.878186 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 00:59:23.878190 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 
'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-29 00:59:23.878195 | orchestrator | 2026-03-29 00:59:23.878199 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-29 00:59:23.878203 | orchestrator | Sunday 29 March 2026 00:59:09 +0000 (0:00:27.874) 0:10:56.904 ********** 2026-03-29 00:59:23.878209 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.878218 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.878228 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.878234 | orchestrator | 2026-03-29 00:59:23.878240 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-29 00:59:23.878246 | orchestrator | Sunday 29 March 2026 00:59:09 +0000 (0:00:00.342) 0:10:57.246 ********** 2026-03-29 00:59:23.878252 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.878258 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.878265 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.878271 | orchestrator | 2026-03-29 00:59:23.878277 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-29 00:59:23.878283 | orchestrator | Sunday 29 March 2026 00:59:10 +0000 (0:00:00.301) 0:10:57.547 ********** 2026-03-29 00:59:23.878289 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.878294 | orchestrator | 2026-03-29 00:59:23.878300 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-29 00:59:23.878306 | orchestrator | Sunday 29 March 2026 00:59:10 +0000 (0:00:00.659) 0:10:58.207 ********** 2026-03-29 00:59:23.878312 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.878319 | orchestrator | 
2026-03-29 00:59:23.878325 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-29 00:59:23.878331 | orchestrator | Sunday 29 March 2026 00:59:11 +0000 (0:00:00.533) 0:10:58.740 ********** 2026-03-29 00:59:23.878341 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.878348 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.878355 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.878362 | orchestrator | 2026-03-29 00:59:23.878369 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-29 00:59:23.878376 | orchestrator | Sunday 29 March 2026 00:59:12 +0000 (0:00:01.204) 0:10:59.945 ********** 2026-03-29 00:59:23.878383 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.878390 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.878395 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.878400 | orchestrator | 2026-03-29 00:59:23.878407 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-29 00:59:23.878415 | orchestrator | Sunday 29 March 2026 00:59:14 +0000 (0:00:01.623) 0:11:01.569 ********** 2026-03-29 00:59:23.878419 | orchestrator | changed: [testbed-node-4] 2026-03-29 00:59:23.878423 | orchestrator | changed: [testbed-node-3] 2026-03-29 00:59:23.878427 | orchestrator | changed: [testbed-node-5] 2026-03-29 00:59:23.878431 | orchestrator | 2026-03-29 00:59:23.878436 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-29 00:59:23.878440 | orchestrator | Sunday 29 March 2026 00:59:16 +0000 (0:00:02.085) 0:11:03.654 ********** 2026-03-29 00:59:23.878444 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.878478 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.878482 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-29 00:59:23.878487 | orchestrator | 2026-03-29 00:59:23.878491 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-29 00:59:23.878495 | orchestrator | Sunday 29 March 2026 00:59:18 +0000 (0:00:02.638) 0:11:06.293 ********** 2026-03-29 00:59:23.878499 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.878503 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.878507 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.878511 | orchestrator | 2026-03-29 00:59:23.878516 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-29 00:59:23.878520 | orchestrator | Sunday 29 March 2026 00:59:19 +0000 (0:00:00.344) 0:11:06.638 ********** 2026-03-29 00:59:23.878524 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 00:59:23.878528 | orchestrator | 2026-03-29 00:59:23.878532 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-29 00:59:23.878536 | orchestrator | Sunday 29 March 2026 00:59:19 +0000 (0:00:00.513) 0:11:07.151 ********** 2026-03-29 00:59:23.878540 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.878545 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.878549 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.878553 | orchestrator | 2026-03-29 00:59:23.878557 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-29 00:59:23.878561 | orchestrator | Sunday 29 March 2026 00:59:20 +0000 (0:00:00.591) 0:11:07.743 ********** 2026-03-29 00:59:23.878566 | orchestrator 
| skipping: [testbed-node-3] 2026-03-29 00:59:23.878570 | orchestrator | skipping: [testbed-node-4] 2026-03-29 00:59:23.878574 | orchestrator | skipping: [testbed-node-5] 2026-03-29 00:59:23.878578 | orchestrator | 2026-03-29 00:59:23.878582 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-29 00:59:23.878586 | orchestrator | Sunday 29 March 2026 00:59:20 +0000 (0:00:00.339) 0:11:08.083 ********** 2026-03-29 00:59:23.878590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-29 00:59:23.878594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-29 00:59:23.878599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-29 00:59:23.878603 | orchestrator | skipping: [testbed-node-3] 2026-03-29 00:59:23.878607 | orchestrator | 2026-03-29 00:59:23.878611 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-29 00:59:23.878615 | orchestrator | Sunday 29 March 2026 00:59:21 +0000 (0:00:00.600) 0:11:08.683 ********** 2026-03-29 00:59:23.878620 | orchestrator | ok: [testbed-node-3] 2026-03-29 00:59:23.878624 | orchestrator | ok: [testbed-node-4] 2026-03-29 00:59:23.878628 | orchestrator | ok: [testbed-node-5] 2026-03-29 00:59:23.878632 | orchestrator | 2026-03-29 00:59:23.878636 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:59:23.878640 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-29 00:59:23.878650 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-29 00:59:23.878654 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-29 00:59:23.878658 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  
rescued=0 ignored=0 2026-03-29 00:59:23.878663 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-29 00:59:23.878667 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-29 00:59:23.878671 | orchestrator | 2026-03-29 00:59:23.878675 | orchestrator | 2026-03-29 00:59:23.878679 | orchestrator | 2026-03-29 00:59:23.878686 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:59:23.878691 | orchestrator | Sunday 29 March 2026 00:59:21 +0000 (0:00:00.275) 0:11:08.958 ********** 2026-03-29 00:59:23.878695 | orchestrator | =============================================================================== 2026-03-29 00:59:23.878699 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 69.26s 2026-03-29 00:59:23.878703 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.12s 2026-03-29 00:59:23.878710 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 27.87s 2026-03-29 00:59:23.878714 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.56s 2026-03-29 00:59:23.878718 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.06s 2026-03-29 00:59:23.878722 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.43s 2026-03-29 00:59:23.878726 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.12s 2026-03-29 00:59:23.878730 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.46s 2026-03-29 00:59:23.878735 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.11s 2026-03-29 00:59:23.878739 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.69s 2026-03-29 00:59:23.878743 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.75s 2026-03-29 00:59:23.878747 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.83s 2026-03-29 00:59:23.878751 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.94s 2026-03-29 00:59:23.878755 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.77s 2026-03-29 00:59:23.878759 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.15s 2026-03-29 00:59:23.878763 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.97s 2026-03-29 00:59:23.878768 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.93s 2026-03-29 00:59:23.878772 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.80s 2026-03-29 00:59:23.878776 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.78s 2026-03-29 00:59:23.878780 | orchestrator | ceph-container-common : Enable ceph.target ------------------------------ 3.67s 2026-03-29 00:59:23.878784 | orchestrator | 2026-03-29 00:59:23 | INFO  | Task 
6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 00:59:23.878788 | orchestrator | 2026-03-29 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:26.901005 | orchestrator | 2026-03-29 00:59:26 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:26.902118 | orchestrator | 2026-03-29 00:59:26 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:59:26.903486 | orchestrator | 2026-03-29 00:59:26 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 00:59:26.903535 | orchestrator | 2026-03-29 00:59:26 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:29.940720 | orchestrator | 2026-03-29 00:59:29 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:29.942897 | orchestrator | 2026-03-29 00:59:29 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:59:29.945148 | orchestrator | 2026-03-29 00:59:29 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 00:59:29.945288 | orchestrator | 2026-03-29 00:59:29 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:32.992214 | orchestrator | 2026-03-29 00:59:32 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:32.993653 | orchestrator | 2026-03-29 00:59:32 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state STARTED 2026-03-29 00:59:32.995990 | orchestrator | 2026-03-29 00:59:32 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 00:59:32.996034 | orchestrator | 2026-03-29 00:59:32 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:36.043688 | orchestrator | 2026-03-29 00:59:36 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:36.046076 | orchestrator | 2026-03-29 00:59:36 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state 
STARTED 2026-03-29 00:59:36.047394 | orchestrator | 2026-03-29 00:59:36 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 00:59:36.047640 | orchestrator | 2026-03-29 00:59:36 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:39.084607 | orchestrator | 2026-03-29 00:59:39 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:39.084693 | orchestrator | 2026-03-29 00:59:39 | INFO  | Task a5477a7b-cfe5-4227-a8c7-7872b3f37e35 is in state SUCCESS 2026-03-29 00:59:39.086087 | orchestrator | 2026-03-29 00:59:39.086136 | orchestrator | 2026-03-29 00:59:39.086144 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:59:39.086152 | orchestrator | 2026-03-29 00:59:39.086159 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:59:39.086166 | orchestrator | Sunday 29 March 2026 00:56:55 +0000 (0:00:00.272) 0:00:00.272 ********** 2026-03-29 00:59:39.086174 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:39.086182 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:39.086188 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:39.086195 | orchestrator | 2026-03-29 00:59:39.086218 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:59:39.086226 | orchestrator | Sunday 29 March 2026 00:56:55 +0000 (0:00:00.292) 0:00:00.564 ********** 2026-03-29 00:59:39.086232 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-29 00:59:39.086239 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-29 00:59:39.086246 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-29 00:59:39.086252 | orchestrator | 2026-03-29 00:59:39.086260 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-29 00:59:39.086266 | 
orchestrator | 2026-03-29 00:59:39.086273 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 00:59:39.086280 | orchestrator | Sunday 29 March 2026 00:56:56 +0000 (0:00:00.421) 0:00:00.985 ********** 2026-03-29 00:59:39.086285 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:39.086309 | orchestrator | 2026-03-29 00:59:39.086317 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-29 00:59:39.086322 | orchestrator | Sunday 29 March 2026 00:56:56 +0000 (0:00:00.451) 0:00:01.437 ********** 2026-03-29 00:59:39.086329 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 00:59:39.086335 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 00:59:39.086340 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-29 00:59:39.086346 | orchestrator | 2026-03-29 00:59:39.086352 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-29 00:59:39.086359 | orchestrator | Sunday 29 March 2026 00:56:57 +0000 (0:00:00.701) 0:00:02.138 ********** 2026-03-29 00:59:39.086368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:59:39.086382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:59:39.086403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 
2026-03-29 00:59:39.086426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086511 | orchestrator | 2026-03-29 00:59:39.086515 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 00:59:39.086522 | orchestrator | Sunday 29 March 2026 00:56:59 +0000 (0:00:01.786) 0:00:03.925 ********** 2026-03-29 00:59:39.086529 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:39.086538 | orchestrator | 2026-03-29 00:59:39.086545 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-29 00:59:39.086550 | orchestrator | Sunday 29 March 
2026 00:56:59 +0000 (0:00:00.621) 0:00:04.546 ********** 2026-03-29 00:59:39.086570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:59:39.086583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:59:39.086590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:59:39.086597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086631 | orchestrator | 2026-03-29 00:59:39.086637 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-29 00:59:39.086644 | orchestrator | Sunday 29 March 2026 00:57:02 +0000 (0:00:02.482) 0:00:07.028 ********** 2026-03-29 00:59:39.086651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:59:39.086658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:59:39.086665 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:39.086677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:59:39.086693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:59:39.086700 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:39.086707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:59:39.086714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:59:39.086719 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:39.086724 | orchestrator | 2026-03-29 00:59:39.086729 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-29 00:59:39.086733 | orchestrator | Sunday 29 March 2026 00:57:03 +0000 (0:00:01.225) 0:00:08.254 ********** 2026-03-29 00:59:39.086740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:59:39.086751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:59:39.086755 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:39.086759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:59:39.086764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:59:39.086768 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:39.086775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-29 00:59:39.086791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-29 00:59:39.086795 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:39.086799 | orchestrator | 2026-03-29 00:59:39.086803 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-29 00:59:39.086807 | orchestrator | Sunday 29 March 2026 00:57:04 +0000 (0:00:00.955) 0:00:09.210 ********** 2026-03-29 00:59:39.086811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:59:39.086815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:59:39.086819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:59:39.086832 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086845 | orchestrator | 2026-03-29 00:59:39.086849 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-29 00:59:39.086853 | orchestrator | Sunday 29 March 2026 00:57:06 +0000 (0:00:02.159) 0:00:11.369 ********** 2026-03-29 00:59:39.086857 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:39.086861 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:39.086868 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:39.086872 | orchestrator | 2026-03-29 00:59:39.086875 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-29 00:59:39.086879 | orchestrator | Sunday 29 
March 2026 00:57:09 +0000 (0:00:02.866) 0:00:14.235 ********** 2026-03-29 00:59:39.086883 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:39.086887 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:39.086891 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:39.086894 | orchestrator | 2026-03-29 00:59:39.086898 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-29 00:59:39.086902 | orchestrator | Sunday 29 March 2026 00:57:11 +0000 (0:00:02.244) 0:00:16.480 ********** 2026-03-29 00:59:39.086915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:59:39.086919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:59:39.086924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-29 00:59:39.086928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-29 00:59:39.086950 | orchestrator | 2026-03-29 00:59:39.086954 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 00:59:39.086958 | orchestrator | Sunday 29 March 2026 00:57:13 +0000 (0:00:01.970) 0:00:18.450 ********** 2026-03-29 00:59:39.086962 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:39.086965 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:39.086969 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:39.086973 | orchestrator | 2026-03-29 00:59:39.086977 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-29 00:59:39.086981 | orchestrator | Sunday 29 March 2026 00:57:14 +0000 (0:00:00.283) 0:00:18.733 ********** 2026-03-29 00:59:39.086984 | orchestrator | 2026-03-29 00:59:39.086988 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-29 00:59:39.086992 | orchestrator | Sunday 29 March 2026 00:57:14 +0000 (0:00:00.063) 0:00:18.797 ********** 2026-03-29 00:59:39.086996 | orchestrator | 2026-03-29 00:59:39.086999 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-29 00:59:39.087003 | orchestrator | Sunday 29 March 2026 00:57:14 +0000 (0:00:00.074) 0:00:18.872 ********** 2026-03-29 00:59:39.087007 | orchestrator | 2026-03-29 00:59:39.087011 | orchestrator | RUNNING HANDLER [opensearch : 
Disable shard allocation] ************************ 2026-03-29 00:59:39.087014 | orchestrator | Sunday 29 March 2026 00:57:14 +0000 (0:00:00.079) 0:00:18.951 ********** 2026-03-29 00:59:39.087024 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:39.087028 | orchestrator | 2026-03-29 00:59:39.087032 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-29 00:59:39.087036 | orchestrator | Sunday 29 March 2026 00:57:14 +0000 (0:00:00.651) 0:00:19.603 ********** 2026-03-29 00:59:39.087039 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:39.087043 | orchestrator | 2026-03-29 00:59:39.087047 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-29 00:59:39.087051 | orchestrator | Sunday 29 March 2026 00:57:15 +0000 (0:00:00.218) 0:00:19.822 ********** 2026-03-29 00:59:39.087055 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:39.087058 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:39.087062 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:39.087066 | orchestrator | 2026-03-29 00:59:39.087070 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-29 00:59:39.087073 | orchestrator | Sunday 29 March 2026 00:58:17 +0000 (0:01:02.359) 0:01:22.181 ********** 2026-03-29 00:59:39.087077 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:39.087081 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:39.087085 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:39.087089 | orchestrator | 2026-03-29 00:59:39.087092 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-29 00:59:39.087096 | orchestrator | Sunday 29 March 2026 00:59:26 +0000 (0:01:08.840) 0:02:31.022 ********** 2026-03-29 00:59:39.087100 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-29 00:59:39.087104 | orchestrator | 2026-03-29 00:59:39.087108 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-29 00:59:39.087111 | orchestrator | Sunday 29 March 2026 00:59:26 +0000 (0:00:00.568) 0:02:31.591 ********** 2026-03-29 00:59:39.087115 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:39.087119 | orchestrator | 2026-03-29 00:59:39.087123 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-29 00:59:39.087126 | orchestrator | Sunday 29 March 2026 00:59:29 +0000 (0:00:02.943) 0:02:34.535 ********** 2026-03-29 00:59:39.087130 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:39.087134 | orchestrator | 2026-03-29 00:59:39.087138 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-29 00:59:39.087142 | orchestrator | Sunday 29 March 2026 00:59:32 +0000 (0:00:02.444) 0:02:36.979 ********** 2026-03-29 00:59:39.087145 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:39.087149 | orchestrator | 2026-03-29 00:59:39.087153 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-29 00:59:39.087157 | orchestrator | Sunday 29 March 2026 00:59:34 +0000 (0:00:02.464) 0:02:39.443 ********** 2026-03-29 00:59:39.087160 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:39.087164 | orchestrator | 2026-03-29 00:59:39.087171 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:59:39.087176 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 00:59:39.087182 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 00:59:39.087192 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  
rescued=0 ignored=0 2026-03-29 00:59:39.087197 | orchestrator | 2026-03-29 00:59:39.087203 | orchestrator | 2026-03-29 00:59:39.087213 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 00:59:39.087220 | orchestrator | Sunday 29 March 2026 00:59:37 +0000 (0:00:02.224) 0:02:41.668 ********** 2026-03-29 00:59:39.087225 | orchestrator | =============================================================================== 2026-03-29 00:59:39.087230 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 68.84s 2026-03-29 00:59:39.087241 | orchestrator | opensearch : Restart opensearch container ------------------------------ 62.36s 2026-03-29 00:59:39.087246 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.94s 2026-03-29 00:59:39.087252 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.87s 2026-03-29 00:59:39.087257 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.48s 2026-03-29 00:59:39.087263 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.46s 2026-03-29 00:59:39.087268 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.44s 2026-03-29 00:59:39.087274 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.24s 2026-03-29 00:59:39.087279 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.22s 2026-03-29 00:59:39.087284 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.16s 2026-03-29 00:59:39.087289 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.97s 2026-03-29 00:59:39.087295 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.79s 2026-03-29 00:59:39.087300 | 
orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.23s 2026-03-29 00:59:39.087307 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.96s 2026-03-29 00:59:39.087312 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.70s 2026-03-29 00:59:39.087319 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.65s 2026-03-29 00:59:39.087324 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s 2026-03-29 00:59:39.087330 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-03-29 00:59:39.087336 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.45s 2026-03-29 00:59:39.087342 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-03-29 00:59:39.087348 | orchestrator | 2026-03-29 00:59:39 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 00:59:39.087354 | orchestrator | 2026-03-29 00:59:39 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:42.121417 | orchestrator | 2026-03-29 00:59:42 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:42.122344 | orchestrator | 2026-03-29 00:59:42 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 00:59:42.122409 | orchestrator | 2026-03-29 00:59:42 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:45.159056 | orchestrator | 2026-03-29 00:59:45 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state STARTED 2026-03-29 00:59:45.159161 | orchestrator | 2026-03-29 00:59:45 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 00:59:45.159179 | orchestrator | 2026-03-29 00:59:45 | INFO  | Wait 1 second(s) until the next check 2026-03-29 
00:59:48.189732 | orchestrator | 2026-03-29 00:59:48 | INFO  | Task ced64d28-fe5a-4313-95aa-34a7f4af5282 is in state SUCCESS 2026-03-29 00:59:48.193114 | orchestrator | 2026-03-29 00:59:48.193191 | orchestrator | 2026-03-29 00:59:48.193203 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-29 00:59:48.193211 | orchestrator | 2026-03-29 00:59:48.193217 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-29 00:59:48.193224 | orchestrator | Sunday 29 March 2026 00:56:55 +0000 (0:00:00.083) 0:00:00.083 ********** 2026-03-29 00:59:48.193231 | orchestrator | ok: [localhost] => { 2026-03-29 00:59:48.193238 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-03-29 00:59:48.193245 | orchestrator | } 2026-03-29 00:59:48.193252 | orchestrator | 2026-03-29 00:59:48.193296 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-29 00:59:48.193506 | orchestrator | Sunday 29 March 2026 00:56:55 +0000 (0:00:00.043) 0:00:00.126 ********** 2026-03-29 00:59:48.193512 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-29 00:59:48.193519 | orchestrator | ...ignoring 2026-03-29 00:59:48.193523 | orchestrator | 2026-03-29 00:59:48.193527 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-29 00:59:48.193531 | orchestrator | Sunday 29 March 2026 00:56:58 +0000 (0:00:02.766) 0:00:02.893 ********** 2026-03-29 00:59:48.193535 | orchestrator | skipping: [localhost] 2026-03-29 00:59:48.193539 | orchestrator | 2026-03-29 00:59:48.193543 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-29 00:59:48.193546 | orchestrator | Sunday 29 March 2026 00:56:58 +0000 (0:00:00.042) 0:00:02.936 ********** 2026-03-29 00:59:48.193561 | orchestrator | ok: [localhost] 2026-03-29 00:59:48.193565 | orchestrator | 2026-03-29 00:59:48.193569 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 00:59:48.193572 | orchestrator | 2026-03-29 00:59:48.193576 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 00:59:48.193580 | orchestrator | Sunday 29 March 2026 00:56:58 +0000 (0:00:00.137) 0:00:03.073 ********** 2026-03-29 00:59:48.193584 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:48.193588 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:48.193591 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:48.193595 | orchestrator | 2026-03-29 00:59:48.193599 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 00:59:48.193604 | orchestrator | Sunday 29 March 2026 00:56:58 +0000 (0:00:00.268) 0:00:03.341 ********** 2026-03-29 00:59:48.193610 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-29 00:59:48.193617 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-03-29 00:59:48.193623 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-29 00:59:48.193629 | orchestrator | 2026-03-29 00:59:48.193635 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-29 00:59:48.193642 | orchestrator | 2026-03-29 00:59:48.193648 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-29 00:59:48.193655 | orchestrator | Sunday 29 March 2026 00:56:59 +0000 (0:00:00.490) 0:00:03.832 ********** 2026-03-29 00:59:48.193662 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 00:59:48.193668 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 00:59:48.193674 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 00:59:48.193680 | orchestrator | 2026-03-29 00:59:48.193687 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 00:59:48.193697 | orchestrator | Sunday 29 March 2026 00:56:59 +0000 (0:00:00.398) 0:00:04.230 ********** 2026-03-29 00:59:48.193708 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:48.193716 | orchestrator | 2026-03-29 00:59:48.193722 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-29 00:59:48.193729 | orchestrator | Sunday 29 March 2026 00:57:00 +0000 (0:00:00.549) 0:00:04.780 ********** 2026-03-29 00:59:48.193760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:59:48.193785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:59:48.193794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:59:48.193806 | orchestrator | 2026-03-29 00:59:48.193818 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-29 00:59:48.193822 | orchestrator | Sunday 29 March 2026 00:57:03 +0000 (0:00:03.129) 0:00:07.910 ********** 2026-03-29 00:59:48.193826 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.193830 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.193834 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.193837 | orchestrator | 2026-03-29 00:59:48.193841 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-29 00:59:48.193845 | orchestrator | Sunday 29 March 2026 00:57:03 +0000 (0:00:00.616) 0:00:08.526 ********** 2026-03-29 00:59:48.193849 | orchestrator | 
skipping: [testbed-node-1] 2026-03-29 00:59:48.193852 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.193856 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.193860 | orchestrator | 2026-03-29 00:59:48.193864 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-29 00:59:48.193868 | orchestrator | Sunday 29 March 2026 00:57:05 +0000 (0:00:01.428) 0:00:09.955 ********** 2026-03-29 00:59:48.193875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:59:48.193884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:59:48.193896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:59:48.193900 | orchestrator | 2026-03-29 00:59:48.193904 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-29 00:59:48.193908 | orchestrator | Sunday 29 March 2026 00:57:08 +0000 (0:00:03.479) 0:00:13.434 ********** 2026-03-29 00:59:48.193912 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.193916 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.193920 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.193924 | orchestrator | 2026-03-29 00:59:48.193927 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-29 00:59:48.193935 | orchestrator | Sunday 29 March 2026 00:57:09 +0000 (0:00:01.089) 0:00:14.524 ********** 2026-03-29 00:59:48.193939 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.193943 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:48.193949 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:48.193955 | orchestrator | 2026-03-29 00:59:48.193960 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 00:59:48.193966 | orchestrator | Sunday 29 March 2026 00:57:14 +0000 (0:00:04.695) 0:00:19.220 ********** 2026-03-29 00:59:48.193971 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:48.193977 | orchestrator | 2026-03-29 00:59:48.193982 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-29 00:59:48.193988 | orchestrator | Sunday 29 March 2026 00:57:15 +0000 (0:00:00.516) 0:00:19.737 ********** 2026-03-29 00:59:48.194000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:59:48.194007 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.194077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:59:48.194094 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:48.194106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:59:48.194114 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.194121 | orchestrator | 2026-03-29 00:59:48.194125 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-29 00:59:48.194129 | orchestrator | Sunday 29 March 2026 00:57:18 
+0000 (0:00:03.313) 0:00:23.050 ********** 2026-03-29 00:59:48.194136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:59:48.194151 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
00:59:48.194160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:59:48.194164 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.194171 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:59:48.194178 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.194182 | orchestrator | 2026-03-29 00:59:48.194186 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2026-03-29 00:59:48.194190 | orchestrator | Sunday 29 March 2026 00:57:21 +0000 (0:00:02.908) 0:00:25.958 ********** 2026-03-29 00:59:48.194197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-03-29 00:59:48.194202 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.194208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:59:48.194216 
| orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:48.194220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-29 00:59:48.194225 | orchestrator | skipping: [testbed-node-2] 2026-03-29 
00:59:48.194228 | orchestrator | 2026-03-29 00:59:48.194232 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-29 00:59:48.194236 | orchestrator | Sunday 29 March 2026 00:57:23 +0000 (0:00:02.210) 0:00:28.169 ********** 2026-03-29 00:59:48.194244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:59:48.194289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-03-29 00:59:48.194301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-29 00:59:48.194309 | orchestrator | 2026-03-29 00:59:48.194313 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2026-03-29 00:59:48.194317 | orchestrator | Sunday 29 March 2026 00:57:26 +0000 (0:00:02.638) 0:00:30.808 ********** 2026-03-29 00:59:48.194320 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.194324 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:48.194328 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:48.194332 | orchestrator | 2026-03-29 00:59:48.194335 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-29 00:59:48.194339 | orchestrator | Sunday 29 March 2026 00:57:27 +0000 (0:00:00.921) 0:00:31.730 ********** 2026-03-29 00:59:48.194343 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:48.194347 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:48.194351 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:48.194355 | orchestrator | 2026-03-29 00:59:48.194359 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-29 00:59:48.194363 | orchestrator | Sunday 29 March 2026 00:57:27 +0000 (0:00:00.303) 0:00:32.033 ********** 2026-03-29 00:59:48.194366 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:48.194370 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:48.194374 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:48.194378 | orchestrator | 2026-03-29 00:59:48.194381 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-29 00:59:48.194385 | orchestrator | Sunday 29 March 2026 00:57:27 +0000 (0:00:00.282) 0:00:32.316 ********** 2026-03-29 00:59:48.194390 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-29 00:59:48.194395 | orchestrator | ...ignoring 2026-03-29 00:59:48.194399 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-29 00:59:48.194403 | orchestrator | ...ignoring 2026-03-29 00:59:48.194407 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-29 00:59:48.194411 | orchestrator | ...ignoring 2026-03-29 00:59:48.194414 | orchestrator | 2026-03-29 00:59:48.194418 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-29 00:59:48.194422 | orchestrator | Sunday 29 March 2026 00:57:38 +0000 (0:00:10.750) 0:00:43.066 ********** 2026-03-29 00:59:48.194489 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:48.194495 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:48.194499 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:48.194503 | orchestrator | 2026-03-29 00:59:48.194506 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-29 00:59:48.194510 | orchestrator | Sunday 29 March 2026 00:57:38 +0000 (0:00:00.423) 0:00:43.489 ********** 2026-03-29 00:59:48.194514 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:48.194517 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.194521 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.194525 | orchestrator | 2026-03-29 00:59:48.194529 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-29 00:59:48.194532 | orchestrator | Sunday 29 March 2026 00:57:39 +0000 (0:00:00.685) 0:00:44.174 ********** 2026-03-29 00:59:48.194536 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:48.194540 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.194544 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.194548 | orchestrator | 2026-03-29 00:59:48.194551 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-29 00:59:48.194555 | orchestrator | Sunday 29 March 2026 00:57:39 +0000 (0:00:00.434) 0:00:44.609 ********** 2026-03-29 00:59:48.194559 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:48.194563 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.194570 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.194574 | orchestrator | 2026-03-29 00:59:48.194578 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-29 00:59:48.194585 | orchestrator | Sunday 29 March 2026 00:57:40 +0000 (0:00:00.460) 0:00:45.069 ********** 2026-03-29 00:59:48.194589 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:48.194593 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:48.194596 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:48.194600 | orchestrator | 2026-03-29 00:59:48.194604 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-29 00:59:48.194608 | orchestrator | Sunday 29 March 2026 00:57:40 +0000 (0:00:00.406) 0:00:45.475 ********** 2026-03-29 00:59:48.194611 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:48.194615 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.194619 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.194623 | orchestrator | 2026-03-29 00:59:48.194626 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 00:59:48.194630 | orchestrator | Sunday 29 March 2026 00:57:41 +0000 (0:00:00.682) 0:00:46.158 ********** 2026-03-29 00:59:48.194634 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.194638 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.194644 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-29 00:59:48.194652 | orchestrator | 2026-03-29 
00:59:48.194657 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-29 00:59:48.194662 | orchestrator | Sunday 29 March 2026 00:57:41 +0000 (0:00:00.396) 0:00:46.554 ********** 2026-03-29 00:59:48.194668 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.194674 | orchestrator | 2026-03-29 00:59:48.194680 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-29 00:59:48.194689 | orchestrator | Sunday 29 March 2026 00:57:52 +0000 (0:00:10.105) 0:00:56.660 ********** 2026-03-29 00:59:48.194695 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:48.194701 | orchestrator | 2026-03-29 00:59:48.194706 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 00:59:48.194712 | orchestrator | Sunday 29 March 2026 00:57:52 +0000 (0:00:00.128) 0:00:56.789 ********** 2026-03-29 00:59:48.194718 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:48.194725 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.194731 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.194737 | orchestrator | 2026-03-29 00:59:48.194744 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-29 00:59:48.194751 | orchestrator | Sunday 29 March 2026 00:57:53 +0000 (0:00:00.966) 0:00:57.756 ********** 2026-03-29 00:59:48.194757 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.194764 | orchestrator | 2026-03-29 00:59:48.194771 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-29 00:59:48.194778 | orchestrator | Sunday 29 March 2026 00:58:01 +0000 (0:00:07.988) 0:01:05.745 ********** 2026-03-29 00:59:48.194785 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:48.194791 | orchestrator | 2026-03-29 00:59:48.194797 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-03-29 00:59:48.194803 | orchestrator | Sunday 29 March 2026 00:58:02 +0000 (0:00:01.611) 0:01:07.356 ********** 2026-03-29 00:59:48.194810 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:48.194816 | orchestrator | 2026-03-29 00:59:48.194823 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-29 00:59:48.194829 | orchestrator | Sunday 29 March 2026 00:58:04 +0000 (0:00:02.278) 0:01:09.635 ********** 2026-03-29 00:59:48.194835 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.194842 | orchestrator | 2026-03-29 00:59:48.194846 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-29 00:59:48.194850 | orchestrator | Sunday 29 March 2026 00:58:05 +0000 (0:00:00.170) 0:01:09.805 ********** 2026-03-29 00:59:48.194853 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:48.194862 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.194866 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.194870 | orchestrator | 2026-03-29 00:59:48.194873 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-29 00:59:48.194877 | orchestrator | Sunday 29 March 2026 00:58:05 +0000 (0:00:00.321) 0:01:10.126 ********** 2026-03-29 00:59:48.194881 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:48.194885 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-29 00:59:48.194888 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:48.194892 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:48.194896 | orchestrator | 2026-03-29 00:59:48.194900 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-29 00:59:48.194903 | orchestrator | skipping: no hosts matched 2026-03-29 00:59:48.194907 | orchestrator | 2026-03-29 00:59:48.194911 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-29 00:59:48.194915 | orchestrator | 2026-03-29 00:59:48.194918 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-29 00:59:48.194922 | orchestrator | Sunday 29 March 2026 00:58:05 +0000 (0:00:00.467) 0:01:10.594 ********** 2026-03-29 00:59:48.194926 | orchestrator | changed: [testbed-node-1] 2026-03-29 00:59:48.194930 | orchestrator | 2026-03-29 00:59:48.194933 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-29 00:59:48.194937 | orchestrator | Sunday 29 March 2026 00:58:21 +0000 (0:00:15.359) 0:01:25.953 ********** 2026-03-29 00:59:48.194941 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:48.194944 | orchestrator | 2026-03-29 00:59:48.194948 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-29 00:59:48.194952 | orchestrator | Sunday 29 March 2026 00:58:36 +0000 (0:00:15.594) 0:01:41.548 ********** 2026-03-29 00:59:48.194956 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:48.194960 | orchestrator | 2026-03-29 00:59:48.194963 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-29 00:59:48.194967 | orchestrator | 2026-03-29 00:59:48.194971 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-29 00:59:48.194975 | orchestrator | Sunday 29 March 2026 00:58:39 +0000 (0:00:02.579) 0:01:44.127 ********** 2026-03-29 00:59:48.194979 | orchestrator | changed: [testbed-node-2] 2026-03-29 00:59:48.194982 | orchestrator | 2026-03-29 00:59:48.194986 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-29 00:59:48.194997 | orchestrator | Sunday 29 March 2026 00:58:56 +0000 (0:00:17.451) 0:02:01.579 ********** 2026-03-29 00:59:48.195001 | 
orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:48.195004 | orchestrator | 2026-03-29 00:59:48.195010 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-29 00:59:48.195016 | orchestrator | Sunday 29 March 2026 00:59:12 +0000 (0:00:15.549) 0:02:17.128 ********** 2026-03-29 00:59:48.195021 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:48.195027 | orchestrator | 2026-03-29 00:59:48.195033 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-29 00:59:48.195038 | orchestrator | 2026-03-29 00:59:48.195043 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-29 00:59:48.195049 | orchestrator | Sunday 29 March 2026 00:59:14 +0000 (0:00:02.510) 0:02:19.639 ********** 2026-03-29 00:59:48.195054 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.195059 | orchestrator | 2026-03-29 00:59:48.195065 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-29 00:59:48.195070 | orchestrator | Sunday 29 March 2026 00:59:26 +0000 (0:00:11.708) 0:02:31.348 ********** 2026-03-29 00:59:48.195075 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:48.195081 | orchestrator | 2026-03-29 00:59:48.195086 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-29 00:59:48.195092 | orchestrator | Sunday 29 March 2026 00:59:31 +0000 (0:00:04.636) 0:02:35.985 ********** 2026-03-29 00:59:48.195103 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:48.195108 | orchestrator | 2026-03-29 00:59:48.195114 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-29 00:59:48.195120 | orchestrator | 2026-03-29 00:59:48.195126 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-29 00:59:48.195136 | orchestrator | 
Sunday 29 March 2026 00:59:34 +0000 (0:00:02.768) 0:02:38.753 ********** 2026-03-29 00:59:48.195143 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 00:59:48.195149 | orchestrator | 2026-03-29 00:59:48.195156 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-29 00:59:48.195162 | orchestrator | Sunday 29 March 2026 00:59:34 +0000 (0:00:00.532) 0:02:39.286 ********** 2026-03-29 00:59:48.195167 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.195171 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.195175 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.195179 | orchestrator | 2026-03-29 00:59:48.195182 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-29 00:59:48.195186 | orchestrator | Sunday 29 March 2026 00:59:36 +0000 (0:00:01.931) 0:02:41.218 ********** 2026-03-29 00:59:48.195190 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.195194 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.195197 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.195201 | orchestrator | 2026-03-29 00:59:48.195205 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-29 00:59:48.195209 | orchestrator | Sunday 29 March 2026 00:59:38 +0000 (0:00:02.213) 0:02:43.431 ********** 2026-03-29 00:59:48.195213 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.195216 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.195220 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.195224 | orchestrator | 2026-03-29 00:59:48.195227 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-29 00:59:48.195231 | orchestrator | Sunday 29 March 2026 00:59:41 +0000 (0:00:02.514) 0:02:45.945 ********** 2026-03-29 00:59:48.195235 | 
orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.195238 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.195242 | orchestrator | changed: [testbed-node-0] 2026-03-29 00:59:48.195246 | orchestrator | 2026-03-29 00:59:48.195250 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-29 00:59:48.195253 | orchestrator | Sunday 29 March 2026 00:59:43 +0000 (0:00:02.495) 0:02:48.441 ********** 2026-03-29 00:59:48.195257 | orchestrator | ok: [testbed-node-2] 2026-03-29 00:59:48.195261 | orchestrator | ok: [testbed-node-1] 2026-03-29 00:59:48.195264 | orchestrator | ok: [testbed-node-0] 2026-03-29 00:59:48.195268 | orchestrator | 2026-03-29 00:59:48.195272 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-29 00:59:48.195276 | orchestrator | Sunday 29 March 2026 00:59:47 +0000 (0:00:03.259) 0:02:51.700 ********** 2026-03-29 00:59:48.195279 | orchestrator | skipping: [testbed-node-0] 2026-03-29 00:59:48.195283 | orchestrator | skipping: [testbed-node-1] 2026-03-29 00:59:48.195287 | orchestrator | skipping: [testbed-node-2] 2026-03-29 00:59:48.195290 | orchestrator | 2026-03-29 00:59:48.195294 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 00:59:48.195298 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-29 00:59:48.195303 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-29 00:59:48.195308 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-29 00:59:48.195312 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-29 00:59:48.195319 | orchestrator | 2026-03-29 00:59:48.195323 | orchestrator | 2026-03-29 00:59:48.195327 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-29 00:59:48.195330 | orchestrator | Sunday 29 March 2026 00:59:47 +0000 (0:00:00.230) 0:02:51.931 ********** 2026-03-29 00:59:48.195334 | orchestrator | =============================================================================== 2026-03-29 00:59:48.195338 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 32.81s 2026-03-29 00:59:48.195342 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.14s 2026-03-29 00:59:48.195349 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.71s 2026-03-29 00:59:48.195353 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.75s 2026-03-29 00:59:48.195357 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.11s 2026-03-29 00:59:48.195361 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.99s 2026-03-29 00:59:48.195364 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.09s 2026-03-29 00:59:48.195368 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.70s 2026-03-29 00:59:48.195372 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.64s 2026-03-29 00:59:48.195375 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.48s 2026-03-29 00:59:48.195379 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.31s 2026-03-29 00:59:48.195383 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.26s 2026-03-29 00:59:48.195389 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.13s 2026-03-29 00:59:48.195395 | orchestrator | service-cert-copy : 
mariadb | Copying over backend internal TLS certificate --- 2.91s 2026-03-29 00:59:48.195400 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.77s 2026-03-29 00:59:48.195411 | orchestrator | Check MariaDB service --------------------------------------------------- 2.77s 2026-03-29 00:59:48.195442 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.64s 2026-03-29 00:59:48.195450 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.51s 2026-03-29 00:59:48.195455 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.50s 2026-03-29 00:59:48.195460 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.28s 2026-03-29 00:59:48.195465 | orchestrator | 2026-03-29 00:59:48 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 00:59:48.195471 | orchestrator | 2026-03-29 00:59:48 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:51.234390 | orchestrator | 2026-03-29 00:59:51 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 00:59:51.235954 | orchestrator | 2026-03-29 00:59:51 | INFO  | Task bb080561-3598-43ff-8e41-af2711d3067b is in state STARTED 2026-03-29 00:59:51.237777 | orchestrator | 2026-03-29 00:59:51 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 00:59:51.238003 | orchestrator | 2026-03-29 00:59:51 | INFO  | Wait 1 second(s) until the next check 2026-03-29 00:59:54.270970 | orchestrator | 2026-03-29 00:59:54 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 00:59:54.276721 | orchestrator | 2026-03-29 00:59:54 | INFO  | Task bb080561-3598-43ff-8e41-af2711d3067b is in state STARTED 2026-03-29 00:59:54.278373 | orchestrator | 2026-03-29 00:59:54 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 
00:59:54.278782 | orchestrator | 2026-03-29 00:59:54 | INFO  | Wait 1 second(s) until the next check [... identical polling output repeated every ~3 seconds through 01:01:25; tasks e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae, bb080561-3598-43ff-8e41-af2711d3067b and 6b27921f-86a3-41d7-be5e-f9aeb660bb6f remain in state STARTED ...] 2026-03-29 01:01:28.621985 | orchestrator | 2026-03-29 01:01:28 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:01:28.624507 | orchestrator | 2026-03-29 01:01:28 | INFO  | Task bb080561-3598-43ff-8e41-af2711d3067b is in
state STARTED 2026-03-29 01:01:28.627215 | orchestrator | 2026-03-29 01:01:28 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state STARTED 2026-03-29 01:01:28.627592 | orchestrator | 2026-03-29 01:01:28 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:31.668407 | orchestrator | 2026-03-29 01:01:31 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:01:31.669505 | orchestrator | 2026-03-29 01:01:31 | INFO  | Task bb080561-3598-43ff-8e41-af2711d3067b is in state STARTED 2026-03-29 01:01:31.672854 | orchestrator | 2026-03-29 01:01:31 | INFO  | Task 6b27921f-86a3-41d7-be5e-f9aeb660bb6f is in state SUCCESS 2026-03-29 01:01:31.674397 | orchestrator | 2026-03-29 01:01:31.674514 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-29 01:01:31.674531 | orchestrator | 2.16.14 2026-03-29 01:01:31.674536 | orchestrator | 2026-03-29 01:01:31.674542 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-29 01:01:31.674549 | orchestrator | 2026-03-29 01:01:31.674555 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-29 01:01:31.674562 | orchestrator | Sunday 29 March 2026 00:59:26 +0000 (0:00:00.533) 0:00:00.533 ********** 2026-03-29 01:01:31.674568 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:01:31.674575 | orchestrator | 2026-03-29 01:01:31.674597 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-29 01:01:31.674604 | orchestrator | Sunday 29 March 2026 00:59:26 +0000 (0:00:00.541) 0:00:01.074 ********** 2026-03-29 01:01:31.674771 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:01:31.674835 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:01:31.674839 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:01:31.674843 | 
orchestrator | 2026-03-29 01:01:31.674847 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-29 01:01:31.674851 | orchestrator | Sunday 29 March 2026 00:59:27 +0000 (0:00:00.690) 0:00:01.764 ********** 2026-03-29 01:01:31.674855 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:01:31.674859 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:01:31.674862 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:01:31.674866 | orchestrator | 2026-03-29 01:01:31.674870 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-29 01:01:31.674874 | orchestrator | Sunday 29 March 2026 00:59:27 +0000 (0:00:00.256) 0:00:02.021 ********** 2026-03-29 01:01:31.674878 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:01:31.674882 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:01:31.674885 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:01:31.674889 | orchestrator | 2026-03-29 01:01:31.674893 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-29 01:01:31.674897 | orchestrator | Sunday 29 March 2026 00:59:28 +0000 (0:00:00.876) 0:00:02.897 ********** 2026-03-29 01:01:31.674901 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:01:31.674920 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:01:31.674924 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:01:31.674930 | orchestrator | 2026-03-29 01:01:31.674936 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-29 01:01:31.674942 | orchestrator | Sunday 29 March 2026 00:59:29 +0000 (0:00:00.321) 0:00:03.218 ********** 2026-03-29 01:01:31.674947 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:01:31.674952 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:01:31.674957 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:01:31.674963 | orchestrator | 2026-03-29 01:01:31.674969 | orchestrator | TASK [ceph-facts 
: Set_fact discovered_interpreter_python] ********************* 2026-03-29 01:01:31.674975 | orchestrator | Sunday 29 March 2026 00:59:29 +0000 (0:00:00.309) 0:00:03.528 ********** 2026-03-29 01:01:31.674980 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:01:31.674985 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:01:31.674990 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:01:31.674995 | orchestrator | 2026-03-29 01:01:31.675000 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-29 01:01:31.675006 | orchestrator | Sunday 29 March 2026 00:59:29 +0000 (0:00:00.318) 0:00:03.847 ********** 2026-03-29 01:01:31.675016 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675136 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.675147 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.675153 | orchestrator | 2026-03-29 01:01:31.675159 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-29 01:01:31.675165 | orchestrator | Sunday 29 March 2026 00:59:30 +0000 (0:00:00.542) 0:00:04.389 ********** 2026-03-29 01:01:31.675170 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:01:31.675176 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:01:31.675181 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:01:31.675187 | orchestrator | 2026-03-29 01:01:31.675194 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-29 01:01:31.675200 | orchestrator | Sunday 29 March 2026 00:59:30 +0000 (0:00:00.318) 0:00:04.708 ********** 2026-03-29 01:01:31.675205 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 01:01:31.675212 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 01:01:31.675218 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-29 01:01:31.675224 | orchestrator | 2026-03-29 01:01:31.675230 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-29 01:01:31.675237 | orchestrator | Sunday 29 March 2026 00:59:31 +0000 (0:00:00.683) 0:00:05.392 ********** 2026-03-29 01:01:31.675243 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:01:31.675249 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:01:31.675255 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:01:31.675261 | orchestrator | 2026-03-29 01:01:31.675267 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-29 01:01:31.675271 | orchestrator | Sunday 29 March 2026 00:59:31 +0000 (0:00:00.432) 0:00:05.825 ********** 2026-03-29 01:01:31.675275 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-29 01:01:31.675279 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-29 01:01:31.675283 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-29 01:01:31.675287 | orchestrator | 2026-03-29 01:01:31.675290 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-29 01:01:31.675294 | orchestrator | Sunday 29 March 2026 00:59:33 +0000 (0:00:02.078) 0:00:07.904 ********** 2026-03-29 01:01:31.675345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-29 01:01:31.675349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-29 01:01:31.675353 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-29 01:01:31.675365 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675369 | orchestrator | 2026-03-29 01:01:31.675402 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-29 
01:01:31.675407 | orchestrator | Sunday 29 March 2026 00:59:34 +0000 (0:00:00.630) 0:00:08.535 ********** 2026-03-29 01:01:31.675413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.675426 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.675430 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.675434 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675438 | orchestrator | 2026-03-29 01:01:31.675442 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-29 01:01:31.675445 | orchestrator | Sunday 29 March 2026 00:59:35 +0000 (0:00:00.828) 0:00:09.363 ********** 2026-03-29 01:01:31.675451 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.675458 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.675462 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.675466 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675470 | orchestrator | 2026-03-29 01:01:31.675474 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-29 01:01:31.675478 | orchestrator | Sunday 29 March 2026 00:59:35 +0000 (0:00:00.265) 0:00:09.629 ********** 2026-03-29 01:01:31.675483 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a5ec47bd76e4', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-29 00:59:32.355433', 'end': '2026-03-29 00:59:32.386075', 'delta': '0:00:00.030642', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a5ec47bd76e4'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-29 01:01:31.675489 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f4e4e2d790dc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-03-29 00:59:33.124728', 'end': '2026-03-29 00:59:33.146470', 'delta': '0:00:00.021742', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f4e4e2d790dc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-29 01:01:31.675514 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '858527e03b20', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-29 00:59:33.602890', 'end': '2026-03-29 00:59:33.641131', 'delta': '0:00:00.038241', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['858527e03b20'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-29 01:01:31.675520 | orchestrator | 2026-03-29 01:01:31.675524 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-29 01:01:31.675527 | orchestrator | Sunday 29 March 2026 00:59:35 +0000 (0:00:00.178) 0:00:09.808 ********** 2026-03-29 01:01:31.675531 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:01:31.675535 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:01:31.675539 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:01:31.675542 | orchestrator | 2026-03-29 01:01:31.675546 | orchestrator | TASK [ceph-facts : Get current 
fsid if cluster is already running] ************* 2026-03-29 01:01:31.675550 | orchestrator | Sunday 29 March 2026 00:59:36 +0000 (0:00:00.443) 0:00:10.251 ********** 2026-03-29 01:01:31.675554 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-29 01:01:31.675558 | orchestrator | 2026-03-29 01:01:31.675562 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-29 01:01:31.675565 | orchestrator | Sunday 29 March 2026 00:59:37 +0000 (0:00:01.442) 0:00:11.693 ********** 2026-03-29 01:01:31.675569 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675573 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.675577 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.675581 | orchestrator | 2026-03-29 01:01:31.675584 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-29 01:01:31.675588 | orchestrator | Sunday 29 March 2026 00:59:37 +0000 (0:00:00.271) 0:00:11.965 ********** 2026-03-29 01:01:31.675592 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675596 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.675600 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.675603 | orchestrator | 2026-03-29 01:01:31.675607 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 01:01:31.675611 | orchestrator | Sunday 29 March 2026 00:59:38 +0000 (0:00:00.360) 0:00:12.326 ********** 2026-03-29 01:01:31.675615 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675618 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.675622 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.675626 | orchestrator | 2026-03-29 01:01:31.675630 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-29 01:01:31.675634 | orchestrator | Sunday 29 March 2026 
00:59:38 +0000 (0:00:00.381) 0:00:12.708 ********** 2026-03-29 01:01:31.675639 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:01:31.675646 | orchestrator | 2026-03-29 01:01:31.675655 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-29 01:01:31.675664 | orchestrator | Sunday 29 March 2026 00:59:38 +0000 (0:00:00.118) 0:00:12.826 ********** 2026-03-29 01:01:31.675670 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675681 | orchestrator | 2026-03-29 01:01:31.675687 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-29 01:01:31.675693 | orchestrator | Sunday 29 March 2026 00:59:38 +0000 (0:00:00.207) 0:00:13.034 ********** 2026-03-29 01:01:31.675698 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675704 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.675710 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.675716 | orchestrator | 2026-03-29 01:01:31.675721 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-29 01:01:31.675728 | orchestrator | Sunday 29 March 2026 00:59:39 +0000 (0:00:00.253) 0:00:13.288 ********** 2026-03-29 01:01:31.675734 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675741 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.675747 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.675753 | orchestrator | 2026-03-29 01:01:31.675760 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-29 01:01:31.675767 | orchestrator | Sunday 29 March 2026 00:59:39 +0000 (0:00:00.324) 0:00:13.612 ********** 2026-03-29 01:01:31.675773 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675780 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.675787 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.675793 | 
orchestrator | 2026-03-29 01:01:31.675800 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-29 01:01:31.675807 | orchestrator | Sunday 29 March 2026 00:59:39 +0000 (0:00:00.470) 0:00:14.083 ********** 2026-03-29 01:01:31.675814 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675819 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.675823 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.675829 | orchestrator | 2026-03-29 01:01:31.675835 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-29 01:01:31.675841 | orchestrator | Sunday 29 March 2026 00:59:40 +0000 (0:00:00.283) 0:00:14.366 ********** 2026-03-29 01:01:31.675847 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675853 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.675859 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.675865 | orchestrator | 2026-03-29 01:01:31.675872 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-29 01:01:31.675878 | orchestrator | Sunday 29 March 2026 00:59:40 +0000 (0:00:00.294) 0:00:14.660 ********** 2026-03-29 01:01:31.675885 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675891 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.675898 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.675929 | orchestrator | 2026-03-29 01:01:31.675935 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-29 01:01:31.675940 | orchestrator | Sunday 29 March 2026 00:59:40 +0000 (0:00:00.305) 0:00:14.966 ********** 2026-03-29 01:01:31.675944 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.675949 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.675953 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.675958 | 
orchestrator | 2026-03-29 01:01:31.675962 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-29 01:01:31.675966 | orchestrator | Sunday 29 March 2026 00:59:41 +0000 (0:00:00.470) 0:00:15.436 ********** 2026-03-29 01:01:31.675976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ec951f8f--e82d--5973--b083--619786b6a4a7-osd--block--ec951f8f--e82d--5973--b083--619786b6a4a7', 'dm-uuid-LVM-9b9wJNrZETWOFpxcna2wuDQPfWOghzez0v4d7ZugYsCTYvBdsaVZHmcJ0Y6u0VzP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.675988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fb9b884b--e3c0--524d--8e95--f889faf8bdb8-osd--block--fb9b884b--e3c0--524d--8e95--f889faf8bdb8', 'dm-uuid-LVM-6qZ8Xz3PCo1t1iPHk1JSrR1oaX7zPMLsbbh5y7RBImcMbddwWlsb8BK6SH7G4D1x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.675993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-29 01:01:31.675998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676012 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--00df2b4e--a360--5652--a277--e346f3e9f535-osd--block--00df2b4e--a360--5652--a277--e346f3e9f535', 'dm-uuid-LVM-IAp02j5g2oQ3zhw0uSFtEtUX8CGfBcpguA02yzkM0hs4bmzvbcYzPv39fZqX0dZl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part1', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part14', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part15', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part16', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  
2026-03-29 01:01:31.676107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35a0cf9a--662c--5baf--94a5--8e3a66aae069-osd--block--35a0cf9a--662c--5baf--94a5--8e3a66aae069', 'dm-uuid-LVM-xyd1men8VV471cj3uej9m9aQwqp84vvGIafLrRukhWiMEyVwTXBzWbGsreYhDDeI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ec951f8f--e82d--5973--b083--619786b6a4a7-osd--block--ec951f8f--e82d--5973--b083--619786b6a4a7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dz2DBe-zqa5-HAl3-4e2z-wvY0-8aLh-eT0uGT', 'scsi-0QEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551', 'scsi-SQEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676143 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fb9b884b--e3c0--524d--8e95--f889faf8bdb8-osd--block--fb9b884b--e3c0--524d--8e95--f889faf8bdb8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nsF1OP-8KYf-Rtrg-mWx0-i8JD-uxdQ-8WncQo', 'scsi-0QEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf', 'scsi-SQEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c', 'scsi-SQEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-29 01:01:31.676216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676229 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:01:31.676234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part1', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part14', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part15', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part16', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676257 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--00df2b4e--a360--5652--a277--e346f3e9f535-osd--block--00df2b4e--a360--5652--a277--e346f3e9f535'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vNrahe-Gh3f-fFop-2AfQ-EXmq-ysXK-ZDOYGr', 'scsi-0QEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c', 'scsi-SQEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--35a0cf9a--662c--5baf--94a5--8e3a66aae069-osd--block--35a0cf9a--662c--5baf--94a5--8e3a66aae069'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gnl4if-m9ue-JNEF-UVVM-UBfY-i0OO-QeQRjB', 'scsi-0QEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d', 'scsi-SQEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53', 'scsi-SQEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--687a2d88--e62e--55f7--9995--e7b8ae522292-osd--block--687a2d88--e62e--55f7--9995--e7b8ae522292', 'dm-uuid-LVM-HmDwxas3Vt7MoPpfiLodPOIM77MdTsZVDz7gRgsdG1f2rJXPvbHyToe5zAfcWUEh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676317 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:01:31.676328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b95a2846--f14f--5a7d--ae9e--15318cf5fdef-osd--block--b95a2846--f14f--5a7d--ae9e--15318cf5fdef', 'dm-uuid-LVM-7XZFubPM5hWk3Oi0Q4YKj9G7POqXT9ZprgBP3A37GbVWoecRs7xdEMHzSdODNj4z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-29 01:01:31.676431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--687a2d88--e62e--55f7--9995--e7b8ae522292-osd--block--687a2d88--e62e--55f7--9995--e7b8ae522292'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XxtEnX-eYq8-LT57-fSiD-l35o-C8D1-uuy9bN', 'scsi-0QEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41', 'scsi-SQEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b95a2846--f14f--5a7d--ae9e--15318cf5fdef-osd--block--b95a2846--f14f--5a7d--ae9e--15318cf5fdef'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0J0qio-0txj-yjdo-d34w-rvdv-XnOu-nkLd7k', 'scsi-0QEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89', 'scsi-SQEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c', 'scsi-SQEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-29 01:01:31.676466 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:01:31.676470 | orchestrator | 2026-03-29 01:01:31.676474 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-29 01:01:31.676478 | orchestrator | Sunday 29 March 2026 00:59:41 +0000 (0:00:00.561) 0:00:15.998 ********** 2026-03-29 01:01:31.676485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ec951f8f--e82d--5973--b083--619786b6a4a7-osd--block--ec951f8f--e82d--5973--b083--619786b6a4a7', 'dm-uuid-LVM-9b9wJNrZETWOFpxcna2wuDQPfWOghzez0v4d7ZugYsCTYvBdsaVZHmcJ0Y6u0VzP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676490 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fb9b884b--e3c0--524d--8e95--f889faf8bdb8-osd--block--fb9b884b--e3c0--524d--8e95--f889faf8bdb8', 'dm-uuid-LVM-6qZ8Xz3PCo1t1iPHk1JSrR1oaX7zPMLsbbh5y7RBImcMbddwWlsb8BK6SH7G4D1x'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676494 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676499 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676503 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676515 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676521 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676529 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676533 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part1', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part14', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part15', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part16', 'scsi-SQEMU_QEMU_HARDDISK_318cc609-7e64-4013-b7ec-e8927e97946a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-29 01:01:31.676553 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ec951f8f--e82d--5973--b083--619786b6a4a7-osd--block--ec951f8f--e82d--5973--b083--619786b6a4a7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dz2DBe-zqa5-HAl3-4e2z-wvY0-8aLh-eT0uGT', 'scsi-0QEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551', 'scsi-SQEMU_QEMU_HARDDISK_f2ea4a06-d51a-493c-82b1-bac83ac89551'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fb9b884b--e3c0--524d--8e95--f889faf8bdb8-osd--block--fb9b884b--e3c0--524d--8e95--f889faf8bdb8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nsF1OP-8KYf-Rtrg-mWx0-i8JD-uxdQ-8WncQo', 'scsi-0QEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf', 'scsi-SQEMU_QEMU_HARDDISK_3634f6e0-2fc1-46dc-9b61-f009b476dcdf'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676562 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--00df2b4e--a360--5652--a277--e346f3e9f535-osd--block--00df2b4e--a360--5652--a277--e346f3e9f535', 'dm-uuid-LVM-IAp02j5g2oQ3zhw0uSFtEtUX8CGfBcpguA02yzkM0hs4bmzvbcYzPv39fZqX0dZl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-29 01:01:31.676575 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c', 'scsi-SQEMU_QEMU_HARDDISK_7c8af2ac-6eac-493c-ba3d-ee53b4a7d40c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676582 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35a0cf9a--662c--5baf--94a5--8e3a66aae069-osd--block--35a0cf9a--662c--5baf--94a5--8e3a66aae069', 'dm-uuid-LVM-xyd1men8VV471cj3uej9m9aQwqp84vvGIafLrRukhWiMEyVwTXBzWbGsreYhDDeI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676587 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676591 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676595 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.676599 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676606 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676614 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676620 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676624 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676629 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676632 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676637 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--687a2d88--e62e--55f7--9995--e7b8ae522292-osd--block--687a2d88--e62e--55f7--9995--e7b8ae522292', 'dm-uuid-LVM-HmDwxas3Vt7MoPpfiLodPOIM77MdTsZVDz7gRgsdG1f2rJXPvbHyToe5zAfcWUEh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676651 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part1', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part14', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part15', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part16', 'scsi-SQEMU_QEMU_HARDDISK_399884be-143f-480f-85e8-b5f7de120e28-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676656 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b95a2846--f14f--5a7d--ae9e--15318cf5fdef-osd--block--b95a2846--f14f--5a7d--ae9e--15318cf5fdef', 'dm-uuid-LVM-7XZFubPM5hWk3Oi0Q4YKj9G7POqXT9ZprgBP3A37GbVWoecRs7xdEMHzSdODNj4z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676661 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--00df2b4e--a360--5652--a277--e346f3e9f535-osd--block--00df2b4e--a360--5652--a277--e346f3e9f535'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vNrahe-Gh3f-fFop-2AfQ-EXmq-ysXK-ZDOYGr', 'scsi-0QEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c', 'scsi-SQEMU_QEMU_HARDDISK_637791c8-8ac8-49ce-9448-9b664b68bb9c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676671 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676677 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--35a0cf9a--662c--5baf--94a5--8e3a66aae069-osd--block--35a0cf9a--662c--5baf--94a5--8e3a66aae069'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gnl4if-m9ue-JNEF-UVVM-UBfY-i0OO-QeQRjB', 'scsi-0QEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d', 'scsi-SQEMU_QEMU_HARDDISK_9c3edf6b-7c95-4460-bef4-a1ae8fb1460d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676681 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676685 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53', 'scsi-SQEMU_QEMU_HARDDISK_45f66f48-5092-4630-bbd4-e7a21fea6d53'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676689 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676697 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676705 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676709 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:01:31.676715 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676719 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676723 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676727 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676740 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ada047f-8836-45d8-9369-df8d0b6945b8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676744 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--687a2d88--e62e--55f7--9995--e7b8ae522292-osd--block--687a2d88--e62e--55f7--9995--e7b8ae522292'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XxtEnX-eYq8-LT57-fSiD-l35o-C8D1-uuy9bN', 'scsi-0QEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41', 'scsi-SQEMU_QEMU_HARDDISK_3da40c29-5f2c-4690-a312-2dad3a63ee41'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676748 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b95a2846--f14f--5a7d--ae9e--15318cf5fdef-osd--block--b95a2846--f14f--5a7d--ae9e--15318cf5fdef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0J0qio-0txj-yjdo-d34w-rvdv-XnOu-nkLd7k', 'scsi-0QEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89', 'scsi-SQEMU_QEMU_HARDDISK_eba3fb10-bf4f-42e8-8781-9c26ea140c89'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676755 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c', 'scsi-SQEMU_QEMU_HARDDISK_071e4fdb-2f21-4724-b6d5-ab202ed81b2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676761 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-29-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-29 01:01:31.676766 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:01:31.676770 | orchestrator |
2026-03-29 01:01:31.676776 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-29 01:01:31.676780 | orchestrator | Sunday 29 March 2026 00:59:42 +0000 (0:00:00.574) 0:00:16.572 **********
2026-03-29 01:01:31.676784 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:01:31.676788 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:01:31.676792 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:01:31.676796 | orchestrator |
2026-03-29 01:01:31.676799 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-29 01:01:31.676803 | orchestrator | Sunday 29 March 2026 00:59:43 +0000 (0:00:00.697) 0:00:17.269 **********
2026-03-29 01:01:31.676807 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:01:31.676811 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:01:31.676815 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:01:31.676818 | orchestrator |
2026-03-29 01:01:31.676822 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-29 01:01:31.676826 | orchestrator | Sunday 29 March 2026 00:59:43 +0000 (0:00:00.496) 0:00:17.766 **********
2026-03-29 01:01:31.676830 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:01:31.676834 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:01:31.676838 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:01:31.676842 | orchestrator |
2026-03-29 01:01:31.676846 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-29 01:01:31.676850 | orchestrator | Sunday 29 March 2026 00:59:44 +0000 (0:00:00.673) 0:00:18.439 **********
2026-03-29 01:01:31.676853 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.676857 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:01:31.676861 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:01:31.676865 | orchestrator |
2026-03-29 01:01:31.676869 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-29 01:01:31.676875 | orchestrator | Sunday 29 March 2026 00:59:44 +0000 (0:00:00.321) 0:00:18.761 **********
2026-03-29 01:01:31.676880 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.676883 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:01:31.676887 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:01:31.676891 | orchestrator |
2026-03-29 01:01:31.676895 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-29 01:01:31.676899 | orchestrator | Sunday 29 March 2026 00:59:45 +0000 (0:00:00.412) 0:00:19.173 **********
2026-03-29 01:01:31.676903 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.676906 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:01:31.676910 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:01:31.676914 | orchestrator |
2026-03-29 01:01:31.676918 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-29 01:01:31.676922 | orchestrator | Sunday 29 March 2026 00:59:45 +0000 (0:00:00.599) 0:00:19.773 **********
2026-03-29 01:01:31.676925 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-29 01:01:31.676930 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-29 01:01:31.676933 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-29 01:01:31.676937 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-29 01:01:31.676941 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-29 01:01:31.676945 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-29 01:01:31.676949 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-29 01:01:31.676952 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-29 01:01:31.676956 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-29 01:01:31.676960 | orchestrator |
2026-03-29 01:01:31.676964 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-29 01:01:31.676968 | orchestrator | Sunday 29 March 2026 00:59:46 +0000 (0:00:01.003) 0:00:20.776 **********
2026-03-29 01:01:31.676972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-29 01:01:31.676976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-29 01:01:31.676980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-29 01:01:31.676983 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.676987 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-29 01:01:31.676991 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-29 01:01:31.676994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-29 01:01:31.676998 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:01:31.677002 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-29 01:01:31.677006 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-29 01:01:31.677010 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-29 01:01:31.677014 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:01:31.677018 | orchestrator |
2026-03-29 01:01:31.677022 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-29 01:01:31.677026 | orchestrator | Sunday 29 March 2026 00:59:46 +0000 (0:00:00.340) 0:00:21.117 **********
2026-03-29 01:01:31.677030 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-29 01:01:31.677034 | orchestrator |
2026-03-29 01:01:31.677038 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-29 01:01:31.677043 | orchestrator | Sunday 29 March 2026 00:59:47 +0000 (0:00:00.715) 0:00:21.833 **********
2026-03-29 01:01:31.677049 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.677053 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:01:31.677057 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:01:31.677061 | orchestrator |
2026-03-29 01:01:31.677065 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-29 01:01:31.677074 | orchestrator | Sunday 29 March 2026 00:59:48 +0000 (0:00:00.317) 0:00:22.150 **********
2026-03-29 01:01:31.677078 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.677082 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:01:31.677086 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:01:31.677089 | orchestrator |
2026-03-29 01:01:31.677093 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-29 01:01:31.677100 | orchestrator | Sunday 29 March 2026 00:59:48 +0000 (0:00:00.323) 0:00:22.473 **********
2026-03-29 01:01:31.677104 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.677108 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:01:31.677112 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:01:31.677116 | orchestrator |
2026-03-29 01:01:31.677120 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-29 01:01:31.677124 | orchestrator | Sunday 29 March 2026 00:59:48 +0000 (0:00:00.316) 0:00:22.789 **********
2026-03-29 01:01:31.677128 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:01:31.677132 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:01:31.677135 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:01:31.677139 | orchestrator |
2026-03-29 01:01:31.677143 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-29 01:01:31.677147 | orchestrator | Sunday 29 March 2026 00:59:49 +0000 (0:00:00.546) 0:00:23.336 **********
2026-03-29 01:01:31.677151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 01:01:31.677155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 01:01:31.677158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 01:01:31.677162 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.677166 | orchestrator |
2026-03-29 01:01:31.677170 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-29 01:01:31.677174 | orchestrator | Sunday 29 March 2026 00:59:49 +0000 (0:00:00.348) 0:00:23.685 **********
2026-03-29 01:01:31.677178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 01:01:31.677182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 01:01:31.677185 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 01:01:31.677189 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.677193 | orchestrator |
2026-03-29 01:01:31.677197 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-29 01:01:31.677201 | orchestrator | Sunday 29 March 2026 00:59:49 +0000 (0:00:00.350) 0:00:24.036 **********
2026-03-29 01:01:31.677205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 01:01:31.677209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-29 01:01:31.677212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-29 01:01:31.677218 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.677224 | orchestrator |
2026-03-29 01:01:31.677230 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-29 01:01:31.677236 | orchestrator | Sunday 29 March 2026 00:59:50 +0000 (0:00:00.354) 0:00:24.391 **********
2026-03-29 01:01:31.677242 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:01:31.677249 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:01:31.677256 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:01:31.677263 | orchestrator |
2026-03-29 01:01:31.677269 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-29 01:01:31.677275 | orchestrator | Sunday 29 March 2026 00:59:50 +0000 (0:00:00.288) 0:00:24.680 **********
2026-03-29 01:01:31.677282 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-29 01:01:31.677288 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-29 01:01:31.677332 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-29 01:01:31.677342 | orchestrator |
2026-03-29 01:01:31.677348 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-29 01:01:31.677354 | orchestrator | Sunday 29 March 2026 00:59:51 +0000 (0:00:00.509) 0:00:25.190 **********
2026-03-29 01:01:31.677366 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-29 01:01:31.677372 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 01:01:31.677378 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 01:01:31.677384 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 01:01:31.677390 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-29 01:01:31.677398 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-29 01:01:31.677403 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-29 01:01:31.677407 | orchestrator |
2026-03-29 01:01:31.677410 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-29 01:01:31.677414 | orchestrator | Sunday 29 March 2026 00:59:51 +0000 (0:00:00.869) 0:00:26.059 **********
2026-03-29 01:01:31.677418 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-29 01:01:31.677422 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-29 01:01:31.677426 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-29 01:01:31.677429 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-29 01:01:31.677433 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-29 01:01:31.677437 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-29 01:01:31.677445 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-29 01:01:31.677448 | orchestrator |
2026-03-29 01:01:31.677452 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-03-29 01:01:31.677456 | orchestrator | Sunday 29 March 2026 00:59:53 +0000 (0:00:01.689) 0:00:27.749 **********
2026-03-29 01:01:31.677460 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:01:31.677464 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:01:31.677468 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-03-29 01:01:31.677472 | orchestrator |
2026-03-29 01:01:31.677476 |
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-29 01:01:31.677483 | orchestrator | Sunday 29 March 2026 00:59:53 +0000 (0:00:00.348) 0:00:28.097 ********** 2026-03-29 01:01:31.677488 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 01:01:31.677493 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 01:01:31.677497 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 01:01:31.677502 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 01:01:31.677506 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-29 01:01:31.677513 | orchestrator | 2026-03-29 01:01:31.677516 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-29 01:01:31.677520 | orchestrator | Sunday 29 March 2026 01:00:38 +0000 (0:00:44.322) 0:01:12.420 ********** 2026-03-29 01:01:31.677524 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677528 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677532 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677535 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677539 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677543 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677547 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-29 01:01:31.677550 | orchestrator | 2026-03-29 01:01:31.677554 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-29 01:01:31.677558 | orchestrator | Sunday 29 March 2026 01:01:02 +0000 (0:00:24.060) 0:01:36.480 ********** 2026-03-29 01:01:31.677561 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677565 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677569 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677573 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677577 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677580 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677584 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-29 01:01:31.677588 | orchestrator | 2026-03-29 01:01:31.677592 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-29 01:01:31.677595 | orchestrator | Sunday 29 March 2026 01:01:14 +0000 (0:00:11.805) 0:01:48.286 ********** 2026-03-29 01:01:31.677599 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677603 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 01:01:31.677607 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 01:01:31.677610 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677614 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 01:01:31.677620 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 01:01:31.677624 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677628 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 01:01:31.677633 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 01:01:31.677640 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677649 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 01:01:31.677655 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 01:01:31.677661 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677667 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-29 01:01:31.677678 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 01:01:31.677684 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-29 01:01:31.677690 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-29 01:01:31.677697 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-29 01:01:31.677703 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-29 01:01:31.677709 | orchestrator | 2026-03-29 01:01:31.677716 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:01:31.677722 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-29 01:01:31.677728 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-29 01:01:31.677732 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-29 01:01:31.677736 | orchestrator | 2026-03-29 01:01:31.677740 | orchestrator | 2026-03-29 01:01:31.677744 | orchestrator | 2026-03-29 01:01:31.677747 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:01:31.677751 | orchestrator | Sunday 29 March 2026 01:01:30 +0000 (0:00:16.216) 0:02:04.502 ********** 2026-03-29 01:01:31.677755 | orchestrator | =============================================================================== 2026-03-29 01:01:31.677759 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.32s 2026-03-29 01:01:31.677763 | orchestrator | generate keys ---------------------------------------------------------- 24.06s 2026-03-29 01:01:31.677767 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.22s 
2026-03-29 01:01:31.677770 | orchestrator | get keys from monitors ------------------------------------------------- 11.81s 2026-03-29 01:01:31.677774 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.08s 2026-03-29 01:01:31.677778 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.69s 2026-03-29 01:01:31.677782 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.44s 2026-03-29 01:01:31.677786 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.00s 2026-03-29 01:01:31.677790 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.88s 2026-03-29 01:01:31.677793 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.87s 2026-03-29 01:01:31.677798 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.83s 2026-03-29 01:01:31.677801 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.72s 2026-03-29 01:01:31.677805 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.70s 2026-03-29 01:01:31.677809 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.69s 2026-03-29 01:01:31.677813 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s 2026-03-29 01:01:31.677817 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s 2026-03-29 01:01:31.677820 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.63s 2026-03-29 01:01:31.677824 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.60s 2026-03-29 01:01:31.677828 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.57s 2026-03-29 
01:01:31.677832 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.56s 2026-03-29 01:01:31.677836 | orchestrator | 2026-03-29 01:01:31 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:34.717515 | orchestrator | 2026-03-29 01:01:34 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:01:34.719858 | orchestrator | 2026-03-29 01:01:34 | INFO  | Task bb080561-3598-43ff-8e41-af2711d3067b is in state SUCCESS 2026-03-29 01:01:34.721501 | orchestrator | 2026-03-29 01:01:34.721545 | orchestrator | 2026-03-29 01:01:34.721551 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:01:34.721556 | orchestrator | 2026-03-29 01:01:34.721560 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:01:34.721564 | orchestrator | Sunday 29 March 2026 00:59:51 +0000 (0:00:00.231) 0:00:00.231 ********** 2026-03-29 01:01:34.721569 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:01:34.721574 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:01:34.721578 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:01:34.721582 | orchestrator | 2026-03-29 01:01:34.721586 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:01:34.721590 | orchestrator | Sunday 29 March 2026 00:59:51 +0000 (0:00:00.249) 0:00:00.480 ********** 2026-03-29 01:01:34.721594 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-29 01:01:34.721598 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-29 01:01:34.721602 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-29 01:01:34.721606 | orchestrator | 2026-03-29 01:01:34.721623 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-29 01:01:34.721627 | orchestrator | 2026-03-29 
01:01:34.721631 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-29 01:01:34.721635 | orchestrator | Sunday 29 March 2026 00:59:52 +0000 (0:00:00.345) 0:00:00.825 ********** 2026-03-29 01:01:34.721639 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:01:34.721644 | orchestrator | 2026-03-29 01:01:34.721648 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-29 01:01:34.721652 | orchestrator | Sunday 29 March 2026 00:59:52 +0000 (0:00:00.467) 0:00:01.293 ********** 2026-03-29 01:01:34.721660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 01:01:34.721693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 01:01:34.721698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 01:01:34.721706 | orchestrator | 2026-03-29 01:01:34.721710 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-29 01:01:34.721714 | orchestrator | Sunday 29 March 2026 00:59:53 +0000 (0:00:01.250) 0:00:02.544 ********** 2026-03-29 01:01:34.721718 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:01:34.721721 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:01:34.721725 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:01:34.721729 | orchestrator | 2026-03-29 01:01:34.721733 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-29 01:01:34.721737 | orchestrator | Sunday 29 
March 2026 00:59:54 +0000 (0:00:00.428) 0:00:02.972 ********** 2026-03-29 01:01:34.721744 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-29 01:01:34.721748 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-29 01:01:34.721752 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-29 01:01:34.721756 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-29 01:01:34.721760 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-29 01:01:34.721763 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-29 01:01:34.721767 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-29 01:01:34.721771 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-29 01:01:34.721778 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-29 01:01:34.721781 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-29 01:01:34.721785 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-29 01:01:34.721789 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-29 01:01:34.721793 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-29 01:01:34.721797 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-29 01:01:34.721800 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-29 01:01:34.721804 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-29 
01:01:34.721808 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-29 01:01:34.721812 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-29 01:01:34.721815 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-29 01:01:34.721819 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-29 01:01:34.721823 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-29 01:01:34.721827 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-29 01:01:34.721830 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-29 01:01:34.721834 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-29 01:01:34.721839 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-29 01:01:34.721853 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-29 01:01:34.721857 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-29 01:01:34.721861 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-29 01:01:34.721865 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-29 01:01:34.721868 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-29 01:01:34.721872 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-29 01:01:34.721876 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-29 01:01:34.721880 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-29 01:01:34.721884 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-29 01:01:34.721888 | orchestrator | 2026-03-29 01:01:34.721892 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-29 01:01:34.721895 | orchestrator | Sunday 29 March 2026 00:59:55 +0000 (0:00:00.716) 0:00:03.689 ********** 2026-03-29 01:01:34.721899 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:01:34.721903 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:01:34.721907 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:01:34.721910 | orchestrator | 2026-03-29 01:01:34.721914 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-29 01:01:34.721918 | orchestrator | Sunday 29 March 2026 00:59:55 +0000 (0:00:00.320) 0:00:04.010 ********** 2026-03-29 01:01:34.721924 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:01:34.721929 | orchestrator | 2026-03-29 01:01:34.721933 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-29 01:01:34.721937 | orchestrator | Sunday 29 March 2026 
00:59:55 +0000 (0:00:00.149) 0:00:04.159 ********** 2026-03-29 01:01:34.721941 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:01:34.721945 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:01:34.721952 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:01:34.721958 | orchestrator | 2026-03-29 01:01:34.721968 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-29 01:01:34.721975 | orchestrator | Sunday 29 March 2026 00:59:55 +0000 (0:00:00.482) 0:00:04.641 ********** 2026-03-29 01:01:34.721982 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:01:34.721988 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:01:34.721995 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:01:34.722001 | orchestrator | 2026-03-29 01:01:34.722006 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-29 01:01:34.722128 | orchestrator | Sunday 29 March 2026 00:59:56 +0000 (0:00:00.337) 0:00:04.979 ********** 2026-03-29 01:01:34.722142 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:01:34.722149 | orchestrator | 2026-03-29 01:01:34.722156 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-29 01:01:34.722163 | orchestrator | Sunday 29 March 2026 00:59:56 +0000 (0:00:00.136) 0:00:05.116 ********** 2026-03-29 01:01:34.722169 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:01:34.722174 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:01:34.722188 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:01:34.722194 | orchestrator | 2026-03-29 01:01:34.722201 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-29 01:01:34.722207 | orchestrator | Sunday 29 March 2026 00:59:56 +0000 (0:00:00.275) 0:00:05.391 ********** 2026-03-29 01:01:34.722213 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:01:34.722219 | orchestrator | 
ok: [testbed-node-1]
2026-03-29 01:01:34.722225 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:01:34.722230 | orchestrator |
2026-03-29 01:01:34.722236 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 01:01:34.722242 | orchestrator | Sunday 29 March 2026 00:59:57 +0000 (0:00:00.380) 0:00:05.771 **********
2026-03-29 01:01:34.722249 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722256 | orchestrator |
2026-03-29 01:01:34.722263 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 01:01:34.722269 | orchestrator | Sunday 29 March 2026 00:59:57 +0000 (0:00:00.316) 0:00:06.088 **********
2026-03-29 01:01:34.722276 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722282 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.722289 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.722328 | orchestrator |
2026-03-29 01:01:34.722341 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 01:01:34.722347 | orchestrator | Sunday 29 March 2026 00:59:57 +0000 (0:00:00.283) 0:00:06.372 **********
2026-03-29 01:01:34.722355 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:01:34.722364 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:01:34.722369 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:01:34.722375 | orchestrator |
2026-03-29 01:01:34.722381 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 01:01:34.722388 | orchestrator | Sunday 29 March 2026 00:59:58 +0000 (0:00:00.335) 0:00:06.708 **********
2026-03-29 01:01:34.722395 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722402 | orchestrator |
2026-03-29 01:01:34.722408 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 01:01:34.722423 | orchestrator | Sunday 29 March 2026 00:59:58 +0000 (0:00:00.173) 0:00:06.881 **********
2026-03-29 01:01:34.722430 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722436 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.722443 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.722449 | orchestrator |
2026-03-29 01:01:34.722456 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 01:01:34.722462 | orchestrator | Sunday 29 March 2026 00:59:58 +0000 (0:00:00.292) 0:00:07.173 **********
2026-03-29 01:01:34.722467 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:01:34.722471 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:01:34.722475 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:01:34.722479 | orchestrator |
2026-03-29 01:01:34.722483 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 01:01:34.722486 | orchestrator | Sunday 29 March 2026 00:59:59 +0000 (0:00:00.490) 0:00:07.664 **********
2026-03-29 01:01:34.722490 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722494 | orchestrator |
2026-03-29 01:01:34.722497 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 01:01:34.722501 | orchestrator | Sunday 29 March 2026 00:59:59 +0000 (0:00:00.132) 0:00:07.797 **********
2026-03-29 01:01:34.722505 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722509 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.722513 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.722517 | orchestrator |
2026-03-29 01:01:34.722520 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 01:01:34.722524 | orchestrator | Sunday 29 March 2026 00:59:59 +0000 (0:00:00.297) 0:00:08.094 **********
2026-03-29 01:01:34.722528 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:01:34.722532 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:01:34.722535 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:01:34.722576 | orchestrator |
2026-03-29 01:01:34.722580 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 01:01:34.722584 | orchestrator | Sunday 29 March 2026 00:59:59 +0000 (0:00:00.364) 0:00:08.459 **********
2026-03-29 01:01:34.722589 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722595 | orchestrator |
2026-03-29 01:01:34.722605 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 01:01:34.722612 | orchestrator | Sunday 29 March 2026 00:59:59 +0000 (0:00:00.137) 0:00:08.596 **********
2026-03-29 01:01:34.722617 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722623 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.722629 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.722635 | orchestrator |
2026-03-29 01:01:34.722641 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 01:01:34.722654 | orchestrator | Sunday 29 March 2026 01:00:00 +0000 (0:00:00.292) 0:00:08.889 **********
2026-03-29 01:01:34.722660 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:01:34.722666 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:01:34.722673 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:01:34.722679 | orchestrator |
2026-03-29 01:01:34.722684 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 01:01:34.722687 | orchestrator | Sunday 29 March 2026 01:00:00 +0000 (0:00:00.559) 0:00:09.449 **********
2026-03-29 01:01:34.722691 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722695 | orchestrator |
2026-03-29 01:01:34.722705 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 01:01:34.722709 | orchestrator | Sunday 29 March 2026 01:00:00 +0000 (0:00:00.149) 0:00:09.598 **********
2026-03-29 01:01:34.722713 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722717 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.722720 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.722724 | orchestrator |
2026-03-29 01:01:34.722733 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 01:01:34.722737 | orchestrator | Sunday 29 March 2026 01:00:01 +0000 (0:00:00.361) 0:00:09.960 **********
2026-03-29 01:01:34.722741 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:01:34.722746 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:01:34.722752 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:01:34.722760 | orchestrator |
2026-03-29 01:01:34.722768 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 01:01:34.722775 | orchestrator | Sunday 29 March 2026 01:00:01 +0000 (0:00:00.315) 0:00:10.275 **********
2026-03-29 01:01:34.722781 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722786 | orchestrator |
2026-03-29 01:01:34.722793 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 01:01:34.722798 | orchestrator | Sunday 29 March 2026 01:00:01 +0000 (0:00:00.146) 0:00:10.422 **********
2026-03-29 01:01:34.722804 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722810 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.722816 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.722821 | orchestrator |
2026-03-29 01:01:34.722826 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 01:01:34.722832 | orchestrator | Sunday 29 March 2026 01:00:02 +0000 (0:00:00.481) 0:00:10.903 **********
2026-03-29 01:01:34.722838 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:01:34.722845 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:01:34.722851 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:01:34.722857 | orchestrator |
2026-03-29 01:01:34.722863 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 01:01:34.722869 | orchestrator | Sunday 29 March 2026 01:00:02 +0000 (0:00:00.306) 0:00:11.210 **********
2026-03-29 01:01:34.722875 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722882 | orchestrator |
2026-03-29 01:01:34.722887 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 01:01:34.722891 | orchestrator | Sunday 29 March 2026 01:00:02 +0000 (0:00:00.140) 0:00:11.350 **********
2026-03-29 01:01:34.722900 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722904 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.722908 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.722912 | orchestrator |
2026-03-29 01:01:34.722916 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-29 01:01:34.722920 | orchestrator | Sunday 29 March 2026 01:00:03 +0000 (0:00:00.308) 0:00:11.659 **********
2026-03-29 01:01:34.722923 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:01:34.722927 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:01:34.722931 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:01:34.722935 | orchestrator |
2026-03-29 01:01:34.722939 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-29 01:01:34.722942 | orchestrator | Sunday 29 March 2026 01:00:03 +0000 (0:00:00.329) 0:00:11.988 **********
2026-03-29 01:01:34.722946 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722950 | orchestrator |
2026-03-29 01:01:34.722954 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-29 01:01:34.722958 | orchestrator | Sunday 29 March 2026 01:00:03 +0000 (0:00:00.127) 0:00:12.116 **********
2026-03-29 01:01:34.722961 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.722965 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.722969 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.722973 | orchestrator |
2026-03-29 01:01:34.722977 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-29 01:01:34.722981 | orchestrator | Sunday 29 March 2026 01:00:03 +0000 (0:00:00.501) 0:00:12.617 **********
2026-03-29 01:01:34.722985 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:01:34.722989 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:01:34.722992 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:01:34.722996 | orchestrator |
2026-03-29 01:01:34.723000 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-29 01:01:34.723004 | orchestrator | Sunday 29 March 2026 01:00:05 +0000 (0:00:01.610) 0:00:14.228 **********
2026-03-29 01:01:34.723008 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-29 01:01:34.723013 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-29 01:01:34.723016 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-29 01:01:34.723020 | orchestrator |
2026-03-29 01:01:34.723024 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-29 01:01:34.723028 | orchestrator | Sunday 29 March 2026 01:00:07 +0000 (0:00:01.941) 0:00:16.169 **********
2026-03-29 01:01:34.723032 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-29 01:01:34.723036 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-29 01:01:34.723040 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-29 01:01:34.723044 | orchestrator |
2026-03-29 01:01:34.723052 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-03-29 01:01:34.723056 | orchestrator | Sunday 29 March 2026 01:00:09 +0000 (0:00:02.454) 0:00:18.624 **********
2026-03-29 01:01:34.723060 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-29 01:01:34.723064 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-29 01:01:34.723068 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-29 01:01:34.723072 | orchestrator |
2026-03-29 01:01:34.723075 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-03-29 01:01:34.723079 | orchestrator | Sunday 29 March 2026 01:00:12 +0000 (0:00:02.106) 0:00:20.730 **********
2026-03-29 01:01:34.723086 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.723090 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.723097 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.723101 | orchestrator |
2026-03-29 01:01:34.723105 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-03-29 01:01:34.723109 | orchestrator | Sunday 29 March 2026 01:00:12 +0000 (0:00:00.324) 0:00:21.055 **********
2026-03-29 01:01:34.723113 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.723117 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.723120 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.723124 | orchestrator |
2026-03-29 01:01:34.723128 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-29 01:01:34.723132 | orchestrator | Sunday 29 March 2026 01:00:12 +0000 (0:00:00.284) 0:00:21.339 **********
2026-03-29 01:01:34.723135 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:01:34.723139 | orchestrator |
2026-03-29 01:01:34.723143 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-03-29 01:01:34.723147 | orchestrator | Sunday 29 March 2026 01:00:13 +0000 (0:00:00.848) 0:00:22.188 **********
2026-03-29 01:01:34.723153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 01:01:34.723170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 01:01:34.723181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 01:01:34.723187 | orchestrator |
2026-03-29 01:01:34.723192 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-03-29 01:01:34.723197 | orchestrator | Sunday 29 March 2026 01:00:15 +0000 (0:00:01.516) 0:00:23.705 **********
2026-03-29 01:01:34.723217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 01:01:34.723228 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.723239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 01:01:34.723250 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.723259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 01:01:34.723266 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.723272 | orchestrator |
2026-03-29 01:01:34.723279 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-03-29 01:01:34.723285 | orchestrator | Sunday 29 March 2026 01:00:15 +0000 (0:00:00.640) 0:00:24.346 **********
2026-03-29 01:01:34.723366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 01:01:34.723384 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:01:34.723394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 01:01:34.723400 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:01:34.723417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 01:01:34.723430 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:01:34.723436 | orchestrator |
2026-03-29 01:01:34.723443 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2026-03-29 01:01:34.723449 | orchestrator | Sunday 29 March 2026 01:00:16 +0000 (0:00:00.811) 0:00:25.157 **********
2026-03-29 01:01:34.723455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-29 01:01:34.723470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect':
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 01:01:34.723478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-29 01:01:34.723485 | orchestrator | 2026-03-29 01:01:34.723489 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-29 01:01:34.723493 | orchestrator | Sunday 29 March 2026 01:00:18 +0000 (0:00:01.557) 0:00:26.715 ********** 2026-03-29 01:01:34.723497 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:01:34.723501 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:01:34.723505 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:01:34.723508 | orchestrator | 2026-03-29 01:01:34.723512 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-29 01:01:34.723519 | orchestrator | Sunday 29 March 2026 01:00:18 +0000 (0:00:00.324) 0:00:27.039 ********** 2026-03-29 01:01:34.723523 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:01:34.723527 | orchestrator | 2026-03-29 01:01:34.723531 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-29 01:01:34.723534 | orchestrator | Sunday 29 March 2026 01:00:18 +0000 (0:00:00.522) 0:00:27.561 ********** 2026-03-29 01:01:34.723538 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:01:34.723542 | orchestrator | 2026-03-29 01:01:34.723546 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 
2026-03-29 01:01:34.723550 | orchestrator | Sunday 29 March 2026 01:00:21 +0000 (0:00:02.768) 0:00:30.330 ********** 2026-03-29 01:01:34.723553 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:01:34.723557 | orchestrator | 2026-03-29 01:01:34.723561 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-29 01:01:34.723565 | orchestrator | Sunday 29 March 2026 01:00:24 +0000 (0:00:02.776) 0:00:33.106 ********** 2026-03-29 01:01:34.723571 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:01:34.723575 | orchestrator | 2026-03-29 01:01:34.723579 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-29 01:01:34.723583 | orchestrator | Sunday 29 March 2026 01:00:40 +0000 (0:00:16.106) 0:00:49.212 ********** 2026-03-29 01:01:34.723587 | orchestrator | 2026-03-29 01:01:34.723591 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-29 01:01:34.723595 | orchestrator | Sunday 29 March 2026 01:00:40 +0000 (0:00:00.091) 0:00:49.303 ********** 2026-03-29 01:01:34.723598 | orchestrator | 2026-03-29 01:01:34.723602 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-29 01:01:34.723606 | orchestrator | Sunday 29 March 2026 01:00:40 +0000 (0:00:00.061) 0:00:49.364 ********** 2026-03-29 01:01:34.723610 | orchestrator | 2026-03-29 01:01:34.723614 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-29 01:01:34.723618 | orchestrator | Sunday 29 March 2026 01:00:40 +0000 (0:00:00.062) 0:00:49.427 ********** 2026-03-29 01:01:34.723622 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:01:34.723626 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:01:34.723629 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:01:34.723633 | orchestrator | 2026-03-29 01:01:34.723639 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-29 01:01:34.723648 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-29 01:01:34.723657 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-29 01:01:34.723664 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-29 01:01:34.723669 | orchestrator | 2026-03-29 01:01:34.723675 | orchestrator | 2026-03-29 01:01:34.723682 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:01:34.723693 | orchestrator | Sunday 29 March 2026 01:01:32 +0000 (0:00:51.489) 0:01:40.917 ********** 2026-03-29 01:01:34.723699 | orchestrator | =============================================================================== 2026-03-29 01:01:34.723704 | orchestrator | horizon : Restart horizon container ------------------------------------ 51.49s 2026-03-29 01:01:34.723711 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.11s 2026-03-29 01:01:34.723718 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.78s 2026-03-29 01:01:34.723723 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.77s 2026-03-29 01:01:34.723729 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.45s 2026-03-29 01:01:34.723734 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.11s 2026-03-29 01:01:34.723740 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.94s 2026-03-29 01:01:34.723747 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.61s 2026-03-29 01:01:34.723753 | orchestrator | horizon : Deploy horizon container 
-------------------------------------- 1.56s 2026-03-29 01:01:34.723759 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.52s 2026-03-29 01:01:34.723765 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.25s 2026-03-29 01:01:34.723772 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.85s 2026-03-29 01:01:34.723778 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.81s 2026-03-29 01:01:34.723782 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s 2026-03-29 01:01:34.723785 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.64s 2026-03-29 01:01:34.723789 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2026-03-29 01:01:34.723793 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-03-29 01:01:34.723797 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s 2026-03-29 01:01:34.723801 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2026-03-29 01:01:34.723805 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s 2026-03-29 01:01:34.723808 | orchestrator | 2026-03-29 01:01:34 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state STARTED 2026-03-29 01:01:34.723816 | orchestrator | 2026-03-29 01:01:34 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:37.761370 | orchestrator | 2026-03-29 01:01:37 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:01:37.762681 | orchestrator | 2026-03-29 01:01:37 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state STARTED 2026-03-29 01:01:37.762724 | orchestrator | 2026-03-29 01:01:37 
| INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:40.806447 | orchestrator | 2026-03-29 01:01:40 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:01:40.807843 | orchestrator | 2026-03-29 01:01:40 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state STARTED 2026-03-29 01:01:40.807897 | orchestrator | 2026-03-29 01:01:40 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:43.849621 | orchestrator | 2026-03-29 01:01:43 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:01:43.850259 | orchestrator | 2026-03-29 01:01:43 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state STARTED 2026-03-29 01:01:43.850332 | orchestrator | 2026-03-29 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:46.897136 | orchestrator | 2026-03-29 01:01:46 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:01:46.899347 | orchestrator | 2026-03-29 01:01:46 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state STARTED 2026-03-29 01:01:46.899438 | orchestrator | 2026-03-29 01:01:46 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:49.934399 | orchestrator | 2026-03-29 01:01:49 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:01:49.936129 | orchestrator | 2026-03-29 01:01:49 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state STARTED 2026-03-29 01:01:49.936182 | orchestrator | 2026-03-29 01:01:49 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:52.971772 | orchestrator | 2026-03-29 01:01:52 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:01:52.974558 | orchestrator | 2026-03-29 01:01:52 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state STARTED 2026-03-29 01:01:52.974634 | orchestrator | 2026-03-29 01:01:52 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:56.017435 | 
orchestrator | 2026-03-29 01:01:56 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:01:56.017517 | orchestrator | 2026-03-29 01:01:56 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state STARTED 2026-03-29 01:01:56.017528 | orchestrator | 2026-03-29 01:01:56 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:01:59.062380 | orchestrator | 2026-03-29 01:01:59 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:01:59.063535 | orchestrator | 2026-03-29 01:01:59 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state STARTED 2026-03-29 01:01:59.063689 | orchestrator | 2026-03-29 01:01:59 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:02.111644 | orchestrator | 2026-03-29 01:02:02 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:02:02.113381 | orchestrator | 2026-03-29 01:02:02 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state STARTED 2026-03-29 01:02:02.114125 | orchestrator | 2026-03-29 01:02:02 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:05.146501 | orchestrator | 2026-03-29 01:02:05 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:02:05.147926 | orchestrator | 2026-03-29 01:02:05 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state STARTED 2026-03-29 01:02:05.148480 | orchestrator | 2026-03-29 01:02:05 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:08.184694 | orchestrator | 2026-03-29 01:02:08 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:02:08.186340 | orchestrator | 2026-03-29 01:02:08 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:08.187539 | orchestrator | 2026-03-29 01:02:08 | INFO  | Task 2fd08095-62b7-4b9e-a2e5-d2e26c4f5b6d is in state SUCCESS 2026-03-29 01:02:08.187591 | orchestrator | 2026-03-29 01:02:08 | INFO  | Wait 1 
second(s) until the next check 2026-03-29 01:02:11.234572 | orchestrator | 2026-03-29 01:02:11 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:02:11.234720 | orchestrator | 2026-03-29 01:02:11 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:11.234731 | orchestrator | 2026-03-29 01:02:11 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:14.284493 | orchestrator | 2026-03-29 01:02:14 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:02:14.286343 | orchestrator | 2026-03-29 01:02:14 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:14.286424 | orchestrator | 2026-03-29 01:02:14 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:17.314104 | orchestrator | 2026-03-29 01:02:17 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:02:17.315451 | orchestrator | 2026-03-29 01:02:17 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:17.315538 | orchestrator | 2026-03-29 01:02:17 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:20.354516 | orchestrator | 2026-03-29 01:02:20 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:02:20.356187 | orchestrator | 2026-03-29 01:02:20 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:20.356328 | orchestrator | 2026-03-29 01:02:20 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:23.393337 | orchestrator | 2026-03-29 01:02:23 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:02:23.394339 | orchestrator | 2026-03-29 01:02:23 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:23.394575 | orchestrator | 2026-03-29 01:02:23 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:26.439594 | orchestrator | 
2026-03-29 01:02:26 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:02:26.442919 | orchestrator | 2026-03-29 01:02:26 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:26.443019 | orchestrator | 2026-03-29 01:02:26 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:29.484129 | orchestrator | 2026-03-29 01:02:29 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state STARTED 2026-03-29 01:02:29.486345 | orchestrator | 2026-03-29 01:02:29 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:29.486797 | orchestrator | 2026-03-29 01:02:29 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:32.514748 | orchestrator | 2026-03-29 01:02:32 | INFO  | Task e8ace2ef-e2ca-4619-a0f8-271d3b1ab7ae is in state SUCCESS 2026-03-29 01:02:32.515775 | orchestrator | 2026-03-29 01:02:32.515844 | orchestrator | 2026-03-29 01:02:32.515854 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-29 01:02:32.515862 | orchestrator | 2026-03-29 01:02:32.515868 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-29 01:02:32.515976 | orchestrator | Sunday 29 March 2026 01:01:34 +0000 (0:00:00.150) 0:00:00.150 ********** 2026-03-29 01:02:32.515987 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-29 01:02:32.515995 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516002 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516009 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 01:02:32.516016 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516023 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-29 01:02:32.516029 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-29 01:02:32.516035 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-29 01:02:32.516041 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-29 01:02:32.516048 | orchestrator | 2026-03-29 01:02:32.516054 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-29 01:02:32.516479 | orchestrator | Sunday 29 March 2026 01:01:39 +0000 (0:00:04.739) 0:00:04.889 ********** 2026-03-29 01:02:32.516492 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-29 01:02:32.516496 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516501 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516505 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 01:02:32.516509 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516513 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-29 01:02:32.516517 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-29 01:02:32.516521 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 
2026-03-29 01:02:32.516525 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-29 01:02:32.516529 | orchestrator | 2026-03-29 01:02:32.516533 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-29 01:02:32.516537 | orchestrator | Sunday 29 March 2026 01:01:43 +0000 (0:00:04.564) 0:00:09.454 ********** 2026-03-29 01:02:32.516541 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-29 01:02:32.516545 | orchestrator | 2026-03-29 01:02:32.516549 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-29 01:02:32.516554 | orchestrator | Sunday 29 March 2026 01:01:44 +0000 (0:00:00.900) 0:00:10.354 ********** 2026-03-29 01:02:32.516558 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-29 01:02:32.516562 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516566 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516570 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 01:02:32.516574 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516578 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-29 01:02:32.516582 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-29 01:02:32.516586 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-29 01:02:32.516590 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-29 01:02:32.516594 | orchestrator | 2026-03-29 01:02:32.516598 | orchestrator | TASK [Check if target directories 
exist] *************************************** 2026-03-29 01:02:32.516602 | orchestrator | Sunday 29 March 2026 01:01:56 +0000 (0:00:11.852) 0:00:22.207 ********** 2026-03-29 01:02:32.516606 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-29 01:02:32.516610 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-29 01:02:32.516614 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-29 01:02:32.516618 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-29 01:02:32.516650 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-29 01:02:32.516656 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-29 01:02:32.516666 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-29 01:02:32.516670 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-29 01:02:32.516674 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-29 01:02:32.516678 | orchestrator | 2026-03-29 01:02:32.516682 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-29 01:02:32.516686 | orchestrator | Sunday 29 March 2026 01:01:59 +0000 (0:00:02.736) 0:00:24.943 ********** 2026-03-29 01:02:32.516691 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-29 01:02:32.516696 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516703 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516710 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 01:02:32.516716 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-29 01:02:32.516723 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-29 01:02:32.516730 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-29 01:02:32.516734 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-29 01:02:32.516738 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-29 01:02:32.516742 | orchestrator | 2026-03-29 01:02:32.516746 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:02:32.516750 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:02:32.516756 | orchestrator | 2026-03-29 01:02:32.516759 | orchestrator | 2026-03-29 01:02:32.516763 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:02:32.516767 | orchestrator | Sunday 29 March 2026 01:02:05 +0000 (0:00:06.352) 0:00:31.296 ********** 2026-03-29 01:02:32.516771 | orchestrator | =============================================================================== 2026-03-29 01:02:32.516775 | orchestrator | Write ceph keys to the share directory --------------------------------- 11.85s 2026-03-29 01:02:32.516779 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.35s 2026-03-29 01:02:32.516783 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.74s 2026-03-29 01:02:32.516787 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.56s 2026-03-29 01:02:32.516790 | orchestrator | Check if target 
directories exist --------------------------------------- 2.74s 2026-03-29 01:02:32.516794 | orchestrator | Create share directory -------------------------------------------------- 0.90s 2026-03-29 01:02:32.516798 | orchestrator | 2026-03-29 01:02:32.516802 | orchestrator | 2026-03-29 01:02:32.516806 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:02:32.516809 | orchestrator | 2026-03-29 01:02:32.516845 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:02:32.516850 | orchestrator | Sunday 29 March 2026 00:59:51 +0000 (0:00:00.231) 0:00:00.232 ********** 2026-03-29 01:02:32.516854 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:02:32.516858 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:02:32.516865 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:02:32.516869 | orchestrator | 2026-03-29 01:02:32.516873 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:02:32.516877 | orchestrator | Sunday 29 March 2026 00:59:51 +0000 (0:00:00.253) 0:00:00.485 ********** 2026-03-29 01:02:32.516881 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-29 01:02:32.516885 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-29 01:02:32.516889 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-29 01:02:32.516892 | orchestrator | 2026-03-29 01:02:32.516901 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-29 01:02:32.516905 | orchestrator | 2026-03-29 01:02:32.516908 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 01:02:32.516912 | orchestrator | Sunday 29 March 2026 00:59:52 +0000 (0:00:00.360) 0:00:00.845 ********** 2026-03-29 01:02:32.516917 | orchestrator | included: 
/ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:02:32.516920 | orchestrator | 2026-03-29 01:02:32.516924 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-29 01:02:32.516928 | orchestrator | Sunday 29 March 2026 00:59:52 +0000 (0:00:00.494) 0:00:01.340 ********** 2026-03-29 01:02:32.516957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.516966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.516973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.516983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:02:32.516994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517053 | orchestrator | 2026-03-29 01:02:32.517060 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-29 01:02:32.517069 | orchestrator | Sunday 29 March 2026 00:59:54 +0000 (0:00:01.819) 0:00:03.160 ********** 2026-03-29 01:02:32.517074 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.517083 | orchestrator | 2026-03-29 01:02:32.517087 | orchestrator | TASK 
[keystone : Set keystone policy file] ************************************* 2026-03-29 01:02:32.517092 | orchestrator | Sunday 29 March 2026 00:59:54 +0000 (0:00:00.134) 0:00:03.294 ********** 2026-03-29 01:02:32.517096 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.517104 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:02:32.517109 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.517113 | orchestrator | 2026-03-29 01:02:32.517117 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-29 01:02:32.517122 | orchestrator | Sunday 29 March 2026 00:59:55 +0000 (0:00:00.430) 0:00:03.725 ********** 2026-03-29 01:02:32.517126 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:02:32.517131 | orchestrator | 2026-03-29 01:02:32.517136 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 01:02:32.517140 | orchestrator | Sunday 29 March 2026 00:59:55 +0000 (0:00:00.801) 0:00:04.526 ********** 2026-03-29 01:02:32.517145 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:02:32.517149 | orchestrator | 2026-03-29 01:02:32.517154 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-29 01:02:32.517158 | orchestrator | Sunday 29 March 2026 00:59:56 +0000 (0:00:00.546) 0:00:05.073 ********** 2026-03-29 01:02:32.517169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.517177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.517185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.517250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-03-29 01:02:32.517273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2026-03-29 01:02:32.517293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517311 | orchestrator | 2026-03-29 01:02:32.517318 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-29 01:02:32.517324 | orchestrator | Sunday 29 March 2026 00:59:59 +0000 (0:00:03.278) 0:00:08.351 ********** 2026-03-29 01:02:32.517335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2026-03-29 01:02:32.517342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:02:32.517357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:02:32.517364 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.517371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:02:32.517383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:02:32.517394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:02:32.517401 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:02:32.517407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:02:32.517420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:02:32.517426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:02:32.517432 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.517439 | orchestrator | 2026-03-29 01:02:32.517446 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-29 01:02:32.517451 | orchestrator | Sunday 29 March 2026 01:00:00 +0000 (0:00:00.641) 0:00:08.993 ********** 2026-03-29 01:02:32.517457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:02:32.517470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:02:32.517477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:02:32.517483 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.517494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:02:32.517500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:02:32.517511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:02:32.517518 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:02:32.517528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:02:32.517534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:02:32.517553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  
2026-03-29 01:02:32.517559 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.517565 | orchestrator | 2026-03-29 01:02:32.517578 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-29 01:02:32.517588 | orchestrator | Sunday 29 March 2026 01:00:01 +0000 (0:00:00.795) 0:00:09.789 ********** 2026-03-29 01:02:32.517594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.517606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.517618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.517624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517674 | orchestrator | 2026-03-29 01:02:32.517680 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-29 01:02:32.517685 | orchestrator | Sunday 29 March 2026 01:00:04 +0000 (0:00:03.403) 0:00:13.192 ********** 2026-03-29 01:02:32.517692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
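The service items looped over in the "Copying over config.json files" task above carry the full container definition, including a `volumes` list with `''` placeholders for optional mounts. A minimal sketch of that structure, reconstructed from the log (the claim that empty entries are placeholders filtered out before the container starts is an assumption about kolla-ansible's behaviour, not shown in this log):

```python
import json

# Hypothetical reconstruction of the "keystone" service item from the log
# above (the loop variable of "Copying over config.json files").
keystone_service = {
    "container_name": "keystone",
    "group": "keystone",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/keystone:26.0.1.20251130",
    "volumes": [
        "/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "",  # placeholder for an optional mount, as seen in the log
        "kolla_logs:/var/log/kolla/",
        "",  # placeholder for an optional mount, as seen in the log
        "keystone_fernet_tokens:/etc/keystone/fernet-keys",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
        "timeout": "30",
    },
}

# Assumed filtering step: drop the empty placeholder entries so only real
# bind mounts and named volumes remain.
volumes = [v for v in keystone_service["volumes"] if v]
print(json.dumps(volumes, indent=2))
```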
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.517703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:02:32.517713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.517720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:02:32.517729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.517735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:02:32.517746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.517768 | orchestrator | 2026-03-29 01:02:32.517774 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-29 01:02:32.517780 | orchestrator | Sunday 29 March 2026 01:00:10 +0000 (0:00:05.674) 0:00:18.866 ********** 2026-03-29 01:02:32.517785 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:02:32.517791 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:02:32.517796 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:02:32.517802 | orchestrator | 2026-03-29 01:02:32.517808 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-29 01:02:32.517813 | orchestrator | Sunday 29 March 2026 01:00:11 +0000 (0:00:01.705) 0:00:20.572 ********** 2026-03-29 01:02:32.517819 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.517824 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:02:32.517829 | 
orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.517835 | orchestrator | 2026-03-29 01:02:32.517840 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-29 01:02:32.517847 | orchestrator | Sunday 29 March 2026 01:00:12 +0000 (0:00:00.596) 0:00:21.169 ********** 2026-03-29 01:02:32.517853 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.517858 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:02:32.517864 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.517870 | orchestrator | 2026-03-29 01:02:32.517875 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-29 01:02:32.517889 | orchestrator | Sunday 29 March 2026 01:00:12 +0000 (0:00:00.298) 0:00:21.468 ********** 2026-03-29 01:02:32.517895 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.517900 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:02:32.517907 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.517912 | orchestrator | 2026-03-29 01:02:32.517918 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-29 01:02:32.517924 | orchestrator | Sunday 29 March 2026 01:00:13 +0000 (0:00:00.484) 0:00:21.952 ********** 2026-03-29 01:02:32.517929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:02:32.517949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:02:32.517956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:02:32.517963 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.517970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:02:32.517977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-29 01:02:32.517982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:02:32.517990 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:02:32.517999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-29 01:02:32.518004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-29 01:02:32.518008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-29 01:02:32.518067 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.518074 | orchestrator | 2026-03-29 01:02:32.518078 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 01:02:32.518082 | orchestrator | Sunday 29 March 2026 01:00:13 +0000 (0:00:00.623) 0:00:22.576 ********** 2026-03-29 01:02:32.518085 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.518089 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:02:32.518093 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.518097 | orchestrator | 2026-03-29 01:02:32.518101 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-29 01:02:32.518105 | orchestrator | Sunday 29 March 2026 01:00:14 +0000 (0:00:00.323) 0:00:22.900 ********** 2026-03-29 01:02:32.518109 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-29 01:02:32.518113 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-29 01:02:32.518117 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-29 01:02:32.518121 | orchestrator | 
2026-03-29 01:02:32.518125 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-29 01:02:32.518129 | orchestrator | Sunday 29 March 2026 01:00:15 +0000 (0:00:01.521) 0:00:24.421 ********** 2026-03-29 01:02:32.518140 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:02:32.518146 | orchestrator | 2026-03-29 01:02:32.518153 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-29 01:02:32.518158 | orchestrator | Sunday 29 March 2026 01:00:16 +0000 (0:00:01.232) 0:00:25.654 ********** 2026-03-29 01:02:32.518164 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.518170 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:02:32.518175 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.518181 | orchestrator | 2026-03-29 01:02:32.518187 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-29 01:02:32.518193 | orchestrator | Sunday 29 March 2026 01:00:17 +0000 (0:00:00.806) 0:00:26.460 ********** 2026-03-29 01:02:32.518218 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:02:32.518225 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-29 01:02:32.518232 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-29 01:02:32.518238 | orchestrator | 2026-03-29 01:02:32.518243 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-29 01:02:32.518251 | orchestrator | Sunday 29 March 2026 01:00:18 +0000 (0:00:01.060) 0:00:27.521 ********** 2026-03-29 01:02:32.518257 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:02:32.518263 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:02:32.518270 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:02:32.518276 | orchestrator | 2026-03-29 01:02:32.518284 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 
2026-03-29 01:02:32.518288 | orchestrator | Sunday 29 March 2026 01:00:19 +0000 (0:00:00.310) 0:00:27.832 ********** 2026-03-29 01:02:32.518291 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-29 01:02:32.518295 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-29 01:02:32.518299 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-29 01:02:32.518303 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-29 01:02:32.518316 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-29 01:02:32.518320 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-29 01:02:32.518324 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-29 01:02:32.518328 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-29 01:02:32.518332 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-29 01:02:32.518336 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-29 01:02:32.518340 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-29 01:02:32.518343 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-29 01:02:32.518347 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-29 01:02:32.518351 | orchestrator | changed: [testbed-node-0] => (item={'src': 
'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-29 01:02:32.518354 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-29 01:02:32.518358 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:02:32.518362 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:02:32.518366 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:02:32.518369 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:02:32.518377 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:02:32.518381 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:02:32.518385 | orchestrator | 2026-03-29 01:02:32.518389 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-29 01:02:32.518392 | orchestrator | Sunday 29 March 2026 01:00:28 +0000 (0:00:09.025) 0:00:36.857 ********** 2026-03-29 01:02:32.518396 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 01:02:32.518400 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 01:02:32.518404 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 01:02:32.518408 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 01:02:32.518411 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 01:02:32.518415 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 
01:02:32.518419 | orchestrator | 2026-03-29 01:02:32.518423 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-29 01:02:32.518427 | orchestrator | Sunday 29 March 2026 01:00:31 +0000 (0:00:02.951) 0:00:39.808 ********** 2026-03-29 01:02:32.518435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.518443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.518448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-29 01:02:32.518456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:02:32.518462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:02:32.518467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-29 01:02:32.518471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.518478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.518482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-29 01:02:32.518490 | orchestrator | 2026-03-29 01:02:32.518494 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 01:02:32.518497 | orchestrator | Sunday 29 March 2026 01:00:33 +0000 (0:00:01.988) 0:00:41.796 ********** 2026-03-29 01:02:32.518501 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.518505 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:02:32.518509 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.518513 | orchestrator | 2026-03-29 
01:02:32.518516 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-29 01:02:32.518521 | orchestrator | Sunday 29 March 2026 01:00:33 +0000 (0:00:00.264) 0:00:42.061 ********** 2026-03-29 01:02:32.518524 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:02:32.518528 | orchestrator | 2026-03-29 01:02:32.518532 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-29 01:02:32.518536 | orchestrator | Sunday 29 March 2026 01:00:35 +0000 (0:00:02.226) 0:00:44.287 ********** 2026-03-29 01:02:32.518540 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:02:32.518544 | orchestrator | 2026-03-29 01:02:32.518548 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-29 01:02:32.518552 | orchestrator | Sunday 29 March 2026 01:00:38 +0000 (0:00:02.421) 0:00:46.708 ********** 2026-03-29 01:02:32.518556 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:02:32.518559 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:02:32.518563 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:02:32.518567 | orchestrator | 2026-03-29 01:02:32.518571 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-29 01:02:32.518574 | orchestrator | Sunday 29 March 2026 01:00:38 +0000 (0:00:00.981) 0:00:47.690 ********** 2026-03-29 01:02:32.518578 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:02:32.518582 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:02:32.518586 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:02:32.518589 | orchestrator | 2026-03-29 01:02:32.518593 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-29 01:02:32.518597 | orchestrator | Sunday 29 March 2026 01:00:39 +0000 (0:00:00.276) 0:00:47.966 ********** 2026-03-29 01:02:32.518601 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
01:02:32.518608 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:02:32.518612 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.518615 | orchestrator | 2026-03-29 01:02:32.518619 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-29 01:02:32.518623 | orchestrator | Sunday 29 March 2026 01:00:39 +0000 (0:00:00.287) 0:00:48.253 ********** 2026-03-29 01:02:32.518627 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:02:32.518631 | orchestrator | 2026-03-29 01:02:32.518634 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-29 01:02:32.518638 | orchestrator | Sunday 29 March 2026 01:00:53 +0000 (0:00:14.306) 0:01:02.560 ********** 2026-03-29 01:02:32.518642 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:02:32.518646 | orchestrator | 2026-03-29 01:02:32.518650 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-29 01:02:32.518654 | orchestrator | Sunday 29 March 2026 01:01:06 +0000 (0:00:12.476) 0:01:15.036 ********** 2026-03-29 01:02:32.518657 | orchestrator | 2026-03-29 01:02:32.518662 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-29 01:02:32.518665 | orchestrator | Sunday 29 March 2026 01:01:06 +0000 (0:00:00.060) 0:01:15.097 ********** 2026-03-29 01:02:32.518669 | orchestrator | 2026-03-29 01:02:32.518673 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-29 01:02:32.518681 | orchestrator | Sunday 29 March 2026 01:01:06 +0000 (0:00:00.061) 0:01:15.158 ********** 2026-03-29 01:02:32.518685 | orchestrator | 2026-03-29 01:02:32.518689 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-29 01:02:32.518692 | orchestrator | Sunday 29 March 2026 01:01:06 +0000 (0:00:00.062) 0:01:15.220 ********** 2026-03-29 
01:02:32.518696 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:02:32.518700 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:02:32.518704 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:02:32.518708 | orchestrator | 2026-03-29 01:02:32.518712 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-29 01:02:32.518715 | orchestrator | Sunday 29 March 2026 01:01:20 +0000 (0:00:14.352) 0:01:29.572 ********** 2026-03-29 01:02:32.518719 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:02:32.518723 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:02:32.518727 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:02:32.518731 | orchestrator | 2026-03-29 01:02:32.518738 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-29 01:02:32.518742 | orchestrator | Sunday 29 March 2026 01:01:25 +0000 (0:00:04.178) 0:01:33.751 ********** 2026-03-29 01:02:32.518746 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:02:32.518749 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:02:32.518753 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:02:32.518757 | orchestrator | 2026-03-29 01:02:32.518761 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 01:02:32.518765 | orchestrator | Sunday 29 March 2026 01:01:36 +0000 (0:00:11.145) 0:01:44.896 ********** 2026-03-29 01:02:32.518768 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:02:32.518772 | orchestrator | 2026-03-29 01:02:32.518776 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-29 01:02:32.518780 | orchestrator | Sunday 29 March 2026 01:01:36 +0000 (0:00:00.619) 0:01:45.515 ********** 2026-03-29 01:02:32.518784 | orchestrator | ok: [testbed-node-2] 2026-03-29 
01:02:32.518788 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:02:32.518792 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:02:32.518795 | orchestrator | 2026-03-29 01:02:32.518799 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-29 01:02:32.518803 | orchestrator | Sunday 29 March 2026 01:01:37 +0000 (0:00:00.731) 0:01:46.247 ********** 2026-03-29 01:02:32.518807 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:02:32.518811 | orchestrator | 2026-03-29 01:02:32.518815 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-29 01:02:32.518819 | orchestrator | Sunday 29 March 2026 01:01:39 +0000 (0:00:01.656) 0:01:47.903 ********** 2026-03-29 01:02:32.518823 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-29 01:02:32.518827 | orchestrator | 2026-03-29 01:02:32.518831 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-29 01:02:32.518835 | orchestrator | Sunday 29 March 2026 01:01:52 +0000 (0:00:13.489) 0:02:01.393 ********** 2026-03-29 01:02:32.518839 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-29 01:02:32.518842 | orchestrator | 2026-03-29 01:02:32.518846 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-29 01:02:32.518850 | orchestrator | Sunday 29 March 2026 01:02:19 +0000 (0:00:26.402) 0:02:27.795 ********** 2026-03-29 01:02:32.518854 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-29 01:02:32.518859 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-29 01:02:32.518865 | orchestrator | 2026-03-29 01:02:32.518871 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-29 01:02:32.518877 | 
orchestrator | Sunday 29 March 2026 01:02:27 +0000 (0:00:07.947) 0:02:35.743 ********** 2026-03-29 01:02:32.518894 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.518902 | orchestrator | 2026-03-29 01:02:32.518909 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-29 01:02:32.518915 | orchestrator | Sunday 29 March 2026 01:02:27 +0000 (0:00:00.113) 0:02:35.857 ********** 2026-03-29 01:02:32.518920 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.518926 | orchestrator | 2026-03-29 01:02:32.518933 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-29 01:02:32.518939 | orchestrator | Sunday 29 March 2026 01:02:27 +0000 (0:00:00.110) 0:02:35.967 ********** 2026-03-29 01:02:32.518944 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.518951 | orchestrator | 2026-03-29 01:02:32.518956 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-29 01:02:32.518967 | orchestrator | Sunday 29 March 2026 01:02:27 +0000 (0:00:00.101) 0:02:36.069 ********** 2026-03-29 01:02:32.518973 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.518979 | orchestrator | 2026-03-29 01:02:32.518985 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-29 01:02:32.518990 | orchestrator | Sunday 29 March 2026 01:02:27 +0000 (0:00:00.386) 0:02:36.456 ********** 2026-03-29 01:02:32.518996 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:02:32.519002 | orchestrator | 2026-03-29 01:02:32.519009 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-29 01:02:32.519014 | orchestrator | Sunday 29 March 2026 01:02:31 +0000 (0:00:03.567) 0:02:40.023 ********** 2026-03-29 01:02:32.519020 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:02:32.519026 | orchestrator | skipping: 
[testbed-node-1] 2026-03-29 01:02:32.519033 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:02:32.519039 | orchestrator | 2026-03-29 01:02:32.519045 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:02:32.519052 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 01:02:32.519060 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 01:02:32.519066 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 01:02:32.519070 | orchestrator | 2026-03-29 01:02:32.519074 | orchestrator | 2026-03-29 01:02:32.519078 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:02:32.519082 | orchestrator | Sunday 29 March 2026 01:02:31 +0000 (0:00:00.396) 0:02:40.419 ********** 2026-03-29 01:02:32.519086 | orchestrator | =============================================================================== 2026-03-29 01:02:32.519090 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.40s 2026-03-29 01:02:32.519094 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.35s 2026-03-29 01:02:32.519103 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.31s 2026-03-29 01:02:32.519107 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.49s 2026-03-29 01:02:32.519111 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 12.48s 2026-03-29 01:02:32.519115 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.15s 2026-03-29 01:02:32.519119 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.03s 2026-03-29 
01:02:32.519123 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.95s 2026-03-29 01:02:32.519127 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.67s 2026-03-29 01:02:32.519131 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.18s 2026-03-29 01:02:32.519135 | orchestrator | keystone : Creating default user role ----------------------------------- 3.57s 2026-03-29 01:02:32.519146 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.40s 2026-03-29 01:02:32.519150 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.28s 2026-03-29 01:02:32.519154 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.95s 2026-03-29 01:02:32.519157 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.42s 2026-03-29 01:02:32.519161 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.23s 2026-03-29 01:02:32.519165 | orchestrator | keystone : Check keystone containers ------------------------------------ 1.99s 2026-03-29 01:02:32.519169 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.82s 2026-03-29 01:02:32.519173 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.71s 2026-03-29 01:02:32.519176 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.66s 2026-03-29 01:02:32.519180 | orchestrator | 2026-03-29 01:02:32 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:32.519184 | orchestrator | 2026-03-29 01:02:32 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:35.546990 | orchestrator | 2026-03-29 01:02:35 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 
01:02:35.547074 | orchestrator | 2026-03-29 01:02:35 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:02:35.547982 | orchestrator | 2026-03-29 01:02:35 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:02:35.548861 | orchestrator | 2026-03-29 01:02:35 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:02:35.549590 | orchestrator | 2026-03-29 01:02:35 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:02:35.549620 | orchestrator | 2026-03-29 01:02:35 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:38.583573 | orchestrator | 2026-03-29 01:02:38 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:38.584921 | orchestrator | 2026-03-29 01:02:38 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:02:38.585707 | orchestrator | 2026-03-29 01:02:38 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:02:38.587890 | orchestrator | 2026-03-29 01:02:38 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:02:38.588765 | orchestrator | 2026-03-29 01:02:38 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:02:38.588827 | orchestrator | 2026-03-29 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:41.626421 | orchestrator | 2026-03-29 01:02:41 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:41.628278 | orchestrator | 2026-03-29 01:02:41 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:02:41.631405 | orchestrator | 2026-03-29 01:02:41 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:02:41.634267 | orchestrator | 2026-03-29 01:02:41 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 
01:02:41.636228 | orchestrator | 2026-03-29 01:02:41 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:02:41.636254 | orchestrator | 2026-03-29 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:44.681485 | orchestrator | 2026-03-29 01:02:44 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:44.685385 | orchestrator | 2026-03-29 01:02:44 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:02:44.687000 | orchestrator | 2026-03-29 01:02:44 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:02:44.689554 | orchestrator | 2026-03-29 01:02:44 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:02:44.691276 | orchestrator | 2026-03-29 01:02:44 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:02:44.691339 | orchestrator | 2026-03-29 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:47.734069 | orchestrator | 2026-03-29 01:02:47 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:47.735408 | orchestrator | 2026-03-29 01:02:47 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:02:47.737269 | orchestrator | 2026-03-29 01:02:47 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:02:47.739549 | orchestrator | 2026-03-29 01:02:47 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:02:47.741423 | orchestrator | 2026-03-29 01:02:47 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:02:47.741667 | orchestrator | 2026-03-29 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:50.783631 | orchestrator | 2026-03-29 01:02:50 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:50.785482 | orchestrator 
| 2026-03-29 01:02:50 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:02:50.787024 | orchestrator | 2026-03-29 01:02:50 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:02:50.788665 | orchestrator | 2026-03-29 01:02:50 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:02:50.790333 | orchestrator | 2026-03-29 01:02:50 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:02:50.790373 | orchestrator | 2026-03-29 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:53.833320 | orchestrator | 2026-03-29 01:02:53 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:53.835895 | orchestrator | 2026-03-29 01:02:53 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:02:53.836922 | orchestrator | 2026-03-29 01:02:53 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:02:53.839051 | orchestrator | 2026-03-29 01:02:53 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:02:53.840597 | orchestrator | 2026-03-29 01:02:53 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:02:53.840688 | orchestrator | 2026-03-29 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:56.882773 | orchestrator | 2026-03-29 01:02:56 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:56.884689 | orchestrator | 2026-03-29 01:02:56 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:02:56.886190 | orchestrator | 2026-03-29 01:02:56 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:02:56.887552 | orchestrator | 2026-03-29 01:02:56 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:02:56.888947 | orchestrator | 
2026-03-29 01:02:56 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:02:56.888983 | orchestrator | 2026-03-29 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:02:59.939227 | orchestrator | 2026-03-29 01:02:59 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:02:59.943631 | orchestrator | 2026-03-29 01:02:59 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:02:59.948097 | orchestrator | 2026-03-29 01:02:59 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:02:59.948179 | orchestrator | 2026-03-29 01:02:59 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:02:59.950055 | orchestrator | 2026-03-29 01:02:59 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:02:59.950093 | orchestrator | 2026-03-29 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:02.989347 | orchestrator | 2026-03-29 01:03:02 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state STARTED 2026-03-29 01:03:02.990439 | orchestrator | 2026-03-29 01:03:02 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:03:02.992394 | orchestrator | 2026-03-29 01:03:02 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:03:02.994163 | orchestrator | 2026-03-29 01:03:02 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:03:02.998657 | orchestrator | 2026-03-29 01:03:03 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:03:02.998705 | orchestrator | 2026-03-29 01:03:03 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:06.064703 | orchestrator | 2026-03-29 01:03:06 | INFO  | Task dd5de592-2328-4874-9cc6-0d271c2c072b is in state SUCCESS 2026-03-29 01:03:06.067194 | orchestrator | 2026-03-29 01:03:06 | INFO  | 
Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:03:06.070502 | orchestrator | 2026-03-29 01:03:06 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:03:06.072254 | orchestrator | 2026-03-29 01:03:06 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:03:06.074934 | orchestrator | 2026-03-29 01:03:06 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:03:06.074994 | orchestrator | 2026-03-29 01:03:06 | INFO  | Task 036ce05a-e548-4e95-9774-13b6e25a846b is in state STARTED 2026-03-29 01:03:06.075000 | orchestrator | 2026-03-29 01:03:06 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:09.117030 | orchestrator | 2026-03-29 01:03:09 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:03:09.117982 | orchestrator | 2026-03-29 01:03:09 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:03:09.122433 | orchestrator | 2026-03-29 01:03:09 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:03:09.124763 | orchestrator | 2026-03-29 01:03:09 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:03:09.128996 | orchestrator | 2026-03-29 01:03:09 | INFO  | Task 036ce05a-e548-4e95-9774-13b6e25a846b is in state STARTED 2026-03-29 01:03:09.129077 | orchestrator | 2026-03-29 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:12.169746 | orchestrator | 2026-03-29 01:03:12 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:03:12.174220 | orchestrator | 2026-03-29 01:03:12 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:03:12.176857 | orchestrator | 2026-03-29 01:03:12 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:03:12.179955 | orchestrator | 2026-03-29 01:03:12 | INFO  | Task 
2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:03:12.183341 | orchestrator | 2026-03-29 01:03:12 | INFO  | Task 036ce05a-e548-4e95-9774-13b6e25a846b is in state STARTED 2026-03-29 01:03:12.183411 | orchestrator | 2026-03-29 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:15.220508 | orchestrator | 2026-03-29 01:03:15 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:03:15.221945 | orchestrator | 2026-03-29 01:03:15 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:03:15.223478 | orchestrator | 2026-03-29 01:03:15 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:03:15.224585 | orchestrator | 2026-03-29 01:03:15 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:03:15.225802 | orchestrator | 2026-03-29 01:03:15 | INFO  | Task 036ce05a-e548-4e95-9774-13b6e25a846b is in state STARTED 2026-03-29 01:03:15.225836 | orchestrator | 2026-03-29 01:03:15 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:18.251090 | orchestrator | 2026-03-29 01:03:18 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:03:18.251204 | orchestrator | 2026-03-29 01:03:18 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:03:18.251735 | orchestrator | 2026-03-29 01:03:18 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:03:18.252514 | orchestrator | 2026-03-29 01:03:18 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:03:18.253573 | orchestrator | 2026-03-29 01:03:18 | INFO  | Task 036ce05a-e548-4e95-9774-13b6e25a846b is in state STARTED 2026-03-29 01:03:18.253650 | orchestrator | 2026-03-29 01:03:18 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:21.281441 | orchestrator | 2026-03-29 01:03:21 | INFO  | Task 
b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:03:21.282673 | orchestrator | 2026-03-29 01:03:21 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:03:21.282834 | orchestrator | 2026-03-29 01:03:21 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:03:21.283713 | orchestrator | 2026-03-29 01:03:21 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:03:21.284774 | orchestrator | 2026-03-29 01:03:21 | INFO  | Task 036ce05a-e548-4e95-9774-13b6e25a846b is in state STARTED 2026-03-29 01:03:21.284799 | orchestrator | 2026-03-29 01:03:21 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:24.308902 | orchestrator | 2026-03-29 01:03:24 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state STARTED 2026-03-29 01:03:24.309474 | orchestrator | 2026-03-29 01:03:24 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:03:24.310208 | orchestrator | 2026-03-29 01:03:24 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:03:24.311092 | orchestrator | 2026-03-29 01:03:24 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:03:24.311699 | orchestrator | 2026-03-29 01:03:24 | INFO  | Task 036ce05a-e548-4e95-9774-13b6e25a846b is in state STARTED 2026-03-29 01:03:24.311741 | orchestrator | 2026-03-29 01:03:24 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:03:27.335301 | orchestrator | 2026-03-29 01:03:27 | INFO  | Task b2d4cb0e-9e5c-4183-a539-6049e71e3f6a is in state SUCCESS 2026-03-29 01:03:27.335357 | orchestrator | 2026-03-29 01:03:27.335363 | orchestrator | 2026-03-29 01:03:27.335380 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-29 01:03:27.335384 | orchestrator | 2026-03-29 01:03:27.335388 | orchestrator | TASK [osism.services.cephclient : Include container 
tasks] ********************* 2026-03-29 01:03:27.335392 | orchestrator | Sunday 29 March 2026 01:02:09 +0000 (0:00:00.260) 0:00:00.260 ********** 2026-03-29 01:03:27.335396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-29 01:03:27.335401 | orchestrator | 2026-03-29 01:03:27.335405 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-29 01:03:27.335409 | orchestrator | Sunday 29 March 2026 01:02:10 +0000 (0:00:00.212) 0:00:00.473 ********** 2026-03-29 01:03:27.335414 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-29 01:03:27.335418 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-29 01:03:27.335422 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-29 01:03:27.335426 | orchestrator | 2026-03-29 01:03:27.335430 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-29 01:03:27.335434 | orchestrator | Sunday 29 March 2026 01:02:11 +0000 (0:00:01.132) 0:00:01.605 ********** 2026-03-29 01:03:27.335438 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-29 01:03:27.335441 | orchestrator | 2026-03-29 01:03:27.335445 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-29 01:03:27.335455 | orchestrator | Sunday 29 March 2026 01:02:12 +0000 (0:00:01.277) 0:00:02.882 ********** 2026-03-29 01:03:27.335459 | orchestrator | changed: [testbed-manager] 2026-03-29 01:03:27.335463 | orchestrator | 2026-03-29 01:03:27.335467 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-29 01:03:27.335471 | orchestrator | Sunday 29 March 2026 01:02:13 +0000 (0:00:00.806) 0:00:03.689 ********** 
2026-03-29 01:03:27.335474 | orchestrator | changed: [testbed-manager] 2026-03-29 01:03:27.335478 | orchestrator | 2026-03-29 01:03:27.335482 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-29 01:03:27.335486 | orchestrator | Sunday 29 March 2026 01:02:14 +0000 (0:00:00.846) 0:00:04.536 ********** 2026-03-29 01:03:27.335490 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-29 01:03:27.335494 | orchestrator | ok: [testbed-manager] 2026-03-29 01:03:27.335498 | orchestrator | 2026-03-29 01:03:27.335501 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-29 01:03:27.335505 | orchestrator | Sunday 29 March 2026 01:02:53 +0000 (0:00:39.652) 0:00:44.189 ********** 2026-03-29 01:03:27.335509 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-29 01:03:27.335513 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-29 01:03:27.335517 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-29 01:03:27.335521 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-29 01:03:27.335524 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-29 01:03:27.335528 | orchestrator | 2026-03-29 01:03:27.335532 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-29 01:03:27.335536 | orchestrator | Sunday 29 March 2026 01:02:57 +0000 (0:00:03.667) 0:00:47.856 ********** 2026-03-29 01:03:27.335540 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-29 01:03:27.335544 | orchestrator | 2026-03-29 01:03:27.335547 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-29 01:03:27.335551 | orchestrator | Sunday 29 March 2026 01:02:57 +0000 (0:00:00.427) 0:00:48.284 ********** 2026-03-29 01:03:27.335555 | orchestrator | skipping: 
[testbed-manager] 2026-03-29 01:03:27.335559 | orchestrator | 2026-03-29 01:03:27.335563 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-29 01:03:27.335567 | orchestrator | Sunday 29 March 2026 01:02:58 +0000 (0:00:00.124) 0:00:48.408 ********** 2026-03-29 01:03:27.335573 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:03:27.335577 | orchestrator | 2026-03-29 01:03:27.335581 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-03-29 01:03:27.335585 | orchestrator | Sunday 29 March 2026 01:02:58 +0000 (0:00:00.453) 0:00:48.861 ********** 2026-03-29 01:03:27.335589 | orchestrator | changed: [testbed-manager] 2026-03-29 01:03:27.335592 | orchestrator | 2026-03-29 01:03:27.335596 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-29 01:03:27.335600 | orchestrator | Sunday 29 March 2026 01:03:00 +0000 (0:00:01.514) 0:00:50.376 ********** 2026-03-29 01:03:27.335604 | orchestrator | changed: [testbed-manager] 2026-03-29 01:03:27.335608 | orchestrator | 2026-03-29 01:03:27.335611 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-29 01:03:27.335615 | orchestrator | Sunday 29 March 2026 01:03:00 +0000 (0:00:00.799) 0:00:51.175 ********** 2026-03-29 01:03:27.335619 | orchestrator | changed: [testbed-manager] 2026-03-29 01:03:27.335624 | orchestrator | 2026-03-29 01:03:27.335630 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-29 01:03:27.335634 | orchestrator | Sunday 29 March 2026 01:03:01 +0000 (0:00:00.585) 0:00:51.761 ********** 2026-03-29 01:03:27.335637 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-29 01:03:27.335641 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-29 01:03:27.335645 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-29 
01:03:27.335649 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-29 01:03:27.335652 | orchestrator | 2026-03-29 01:03:27.335656 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:03:27.335660 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:03:27.335664 | orchestrator | 2026-03-29 01:03:27.335668 | orchestrator | 2026-03-29 01:03:27.335676 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:03:27.335680 | orchestrator | Sunday 29 March 2026 01:03:03 +0000 (0:00:01.610) 0:00:53.371 ********** 2026-03-29 01:03:27.335684 | orchestrator | =============================================================================== 2026-03-29 01:03:27.335687 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.65s 2026-03-29 01:03:27.335691 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.67s 2026-03-29 01:03:27.335695 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.61s 2026-03-29 01:03:27.335699 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.51s 2026-03-29 01:03:27.335702 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.28s 2026-03-29 01:03:27.335740 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.13s 2026-03-29 01:03:27.335746 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.85s 2026-03-29 01:03:27.335750 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.81s 2026-03-29 01:03:27.335754 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.80s 2026-03-29 01:03:27.335757 | orchestrator | 
osism.services.cephclient : Wait for an healthy service ----------------- 0.59s 2026-03-29 01:03:27.335761 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.45s 2026-03-29 01:03:27.335765 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s 2026-03-29 01:03:27.335771 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2026-03-29 01:03:27.335775 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2026-03-29 01:03:27.335779 | orchestrator | 2026-03-29 01:03:27.335783 | orchestrator | 2026-03-29 01:03:27.335787 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-29 01:03:27.335791 | orchestrator | 2026-03-29 01:03:27.335794 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-29 01:03:27.335802 | orchestrator | Sunday 29 March 2026 01:02:36 +0000 (0:00:00.111) 0:00:00.111 ********** 2026-03-29 01:03:27.335805 | orchestrator | changed: [localhost] 2026-03-29 01:03:27.335809 | orchestrator | 2026-03-29 01:03:27.335813 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-29 01:03:27.335817 | orchestrator | Sunday 29 March 2026 01:02:37 +0000 (0:00:00.769) 0:00:00.881 ********** 2026-03-29 01:03:27.335821 | orchestrator | changed: [localhost] 2026-03-29 01:03:27.335824 | orchestrator | 2026-03-29 01:03:27.335828 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-03-29 01:03:27.335832 | orchestrator | Sunday 29 March 2026 01:03:19 +0000 (0:00:42.714) 0:00:43.596 ********** 2026-03-29 01:03:27.335836 | orchestrator | changed: [localhost] 2026-03-29 01:03:27.335839 | orchestrator | 2026-03-29 01:03:27.335843 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-03-29 01:03:27.335847 | orchestrator | 2026-03-29 01:03:27.335851 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:03:27.335855 | orchestrator | Sunday 29 March 2026 01:03:24 +0000 (0:00:05.079) 0:00:48.675 ********** 2026-03-29 01:03:27.335858 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:03:27.335862 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:03:27.335866 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:03:27.335870 | orchestrator | 2026-03-29 01:03:27.335874 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:03:27.335877 | orchestrator | Sunday 29 March 2026 01:03:25 +0000 (0:00:00.314) 0:00:48.990 ********** 2026-03-29 01:03:27.335881 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-03-29 01:03:27.335885 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-03-29 01:03:27.335889 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-03-29 01:03:27.335893 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-03-29 01:03:27.335896 | orchestrator | 2026-03-29 01:03:27.335900 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-03-29 01:03:27.335904 | orchestrator | skipping: no hosts matched 2026-03-29 01:03:27.335908 | orchestrator | 2026-03-29 01:03:27.335911 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:03:27.335915 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:03:27.335922 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:03:27.335929 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 
ignored=0 2026-03-29 01:03:27.335935 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:03:27.335940 | orchestrator | 2026-03-29 01:03:27.335948 | orchestrator | 2026-03-29 01:03:27.335954 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:03:27.335960 | orchestrator | Sunday 29 March 2026 01:03:25 +0000 (0:00:00.493) 0:00:49.483 ********** 2026-03-29 01:03:27.335966 | orchestrator | =============================================================================== 2026-03-29 01:03:27.335972 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 42.71s 2026-03-29 01:03:27.335978 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.08s 2026-03-29 01:03:27.335984 | orchestrator | Ensure the destination directory exists --------------------------------- 0.77s 2026-03-29 01:03:27.335990 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2026-03-29 01:03:27.335999 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-03-29 01:03:27.336006 | orchestrator | 2026-03-29 01:03:27 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:03:27.336725 | orchestrator | 2026-03-29 01:03:27 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:03:27.338584 | orchestrator | 2026-03-29 01:03:27 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:03:27.339257 | orchestrator | 2026-03-29 01:03:27 | INFO  | Task 0e23e958-f8a5-4f63-8c44-baed85d6b6a8 is in state STARTED 2026-03-29 01:03:27.339918 | orchestrator | 2026-03-29 01:03:27 | INFO  | Task 036ce05a-e548-4e95-9774-13b6e25a846b is in state STARTED 2026-03-29 01:03:27.340031 | orchestrator | 2026-03-29 01:03:27 | INFO  | Wait 1 second(s) until the next 
check 2026-03-29 01:04:18.873103 | orchestrator | 2026-03-29 01:04:18 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:04:18.873500 | orchestrator | 2026-03-29 01:04:18 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:04:18.874442 | orchestrator | 2026-03-29 01:04:18 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state STARTED 2026-03-29 01:04:18.874509 | orchestrator | 2026-03-29 01:04:18 | INFO  | Task 0e23e958-f8a5-4f63-8c44-baed85d6b6a8 is in state STARTED 2026-03-29 01:04:18.875209 | orchestrator | 2026-03-29 01:04:18 | INFO  | Task 036ce05a-e548-4e95-9774-13b6e25a846b is in state SUCCESS 2026-03-29 01:04:18.875231 | orchestrator | 2026-03-29 01:04:18 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:04:40.127268 | orchestrator | 2026-03-29 01:04:40 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:04:40.128112 | orchestrator | 2026-03-29 01:04:40 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:04:40.129446 | orchestrator | 2026-03-29 01:04:40 | INFO  | Task 2174adc4-1683-4730-b433-926347d75d57 is in state SUCCESS 2026-03-29 01:04:40.129652 | orchestrator | 2026-03-29 01:04:40.129674 | orchestrator | [WARNING]: Collection
community.general does not support Ansible version 2026-03-29 01:04:40.129728 | orchestrator | 2.16.14 2026-03-29 01:04:40.129738 | orchestrator | 2026-03-29 01:04:40.129745 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-29 01:04:40.129753 | orchestrator | 2026-03-29 01:04:40.129760 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-29 01:04:40.129767 | orchestrator | Sunday 29 March 2026 01:03:07 +0000 (0:00:00.257) 0:00:00.257 ********** 2026-03-29 01:04:40.129774 | orchestrator | changed: [testbed-manager] 2026-03-29 01:04:40.129782 | orchestrator | 2026-03-29 01:04:40.129789 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-29 01:04:40.129797 | orchestrator | Sunday 29 March 2026 01:03:08 +0000 (0:00:01.386) 0:00:01.644 ********** 2026-03-29 01:04:40.129804 | orchestrator | changed: [testbed-manager] 2026-03-29 01:04:40.129812 | orchestrator | 2026-03-29 01:04:40.129819 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-29 01:04:40.129825 | orchestrator | Sunday 29 March 2026 01:03:09 +0000 (0:00:01.003) 0:00:02.647 ********** 2026-03-29 01:04:40.129831 | orchestrator | changed: [testbed-manager] 2026-03-29 01:04:40.129838 | orchestrator | 2026-03-29 01:04:40.129844 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-29 01:04:40.129851 | orchestrator | Sunday 29 March 2026 01:03:11 +0000 (0:00:01.138) 0:00:03.785 ********** 2026-03-29 01:04:40.129858 | orchestrator | changed: [testbed-manager] 2026-03-29 01:04:40.129864 | orchestrator | 2026-03-29 01:04:40.129871 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-29 01:04:40.129890 | orchestrator | Sunday 29 March 2026 01:03:12 +0000 (0:00:01.269) 0:00:05.055 ********** 2026-03-29 
01:04:40.129919 | orchestrator | changed: [testbed-manager]
2026-03-29 01:04:40.129928 | orchestrator |
2026-03-29 01:04:40.129935 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-03-29 01:04:40.129942 | orchestrator | Sunday 29 March 2026 01:03:13 +0000 (0:00:01.609) 0:00:06.665 **********
2026-03-29 01:04:40.129948 | orchestrator | changed: [testbed-manager]
2026-03-29 01:04:40.129955 | orchestrator |
2026-03-29 01:04:40.129962 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-03-29 01:04:40.129969 | orchestrator | Sunday 29 March 2026 01:03:15 +0000 (0:00:01.312) 0:00:07.977 **********
2026-03-29 01:04:40.129976 | orchestrator | changed: [testbed-manager]
2026-03-29 01:04:40.129983 | orchestrator |
2026-03-29 01:04:40.129990 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-03-29 01:04:40.129997 | orchestrator | Sunday 29 March 2026 01:03:17 +0000 (0:00:02.024) 0:00:10.002 **********
2026-03-29 01:04:40.130169 | orchestrator | changed: [testbed-manager]
2026-03-29 01:04:40.130331 | orchestrator |
2026-03-29 01:04:40.130341 | orchestrator | TASK [Create admin user] *******************************************************
2026-03-29 01:04:40.130349 | orchestrator | Sunday 29 March 2026 01:03:18 +0000 (0:00:00.945) 0:00:10.947 **********
2026-03-29 01:04:40.130355 | orchestrator | changed: [testbed-manager]
2026-03-29 01:04:40.130363 | orchestrator |
2026-03-29 01:04:40.130370 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-03-29 01:04:40.130378 | orchestrator | Sunday 29 March 2026 01:03:52 +0000 (0:00:34.754) 0:00:45.702 **********
2026-03-29 01:04:40.130386 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:04:40.130394 | orchestrator |
2026-03-29 01:04:40.130403 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-29 01:04:40.130410 | orchestrator |
2026-03-29 01:04:40.130417 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-29 01:04:40.130425 | orchestrator | Sunday 29 March 2026 01:03:53 +0000 (0:00:00.136) 0:00:45.838 **********
2026-03-29 01:04:40.130431 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:04:40.130438 | orchestrator |
2026-03-29 01:04:40.130445 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-29 01:04:40.130452 | orchestrator |
2026-03-29 01:04:40.130459 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-29 01:04:40.130478 | orchestrator | Sunday 29 March 2026 01:04:04 +0000 (0:00:11.431) 0:00:57.270 **********
2026-03-29 01:04:40.130485 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:04:40.130492 | orchestrator |
2026-03-29 01:04:40.130499 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-03-29 01:04:40.130505 | orchestrator |
2026-03-29 01:04:40.130512 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-03-29 01:04:40.130520 | orchestrator | Sunday 29 March 2026 01:04:15 +0000 (0:00:11.434) 0:01:08.704 **********
2026-03-29 01:04:40.130527 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:04:40.130534 | orchestrator |
2026-03-29 01:04:40.130542 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:04:40.130550 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-29 01:04:40.130558 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:04:40.130566 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:04:40.130573 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:04:40.130723 | orchestrator |
2026-03-29 01:04:40.130733 | orchestrator |
2026-03-29 01:04:40.130740 | orchestrator |
2026-03-29 01:04:40.130747 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:04:40.130754 | orchestrator | Sunday 29 March 2026 01:04:17 +0000 (0:00:01.009) 0:01:09.714 **********
2026-03-29 01:04:40.130761 | orchestrator | ===============================================================================
2026-03-29 01:04:40.130769 | orchestrator | Create admin user ------------------------------------------------------ 34.75s
2026-03-29 01:04:40.130844 | orchestrator | Restart ceph manager service ------------------------------------------- 23.88s
2026-03-29 01:04:40.130854 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.02s
2026-03-29 01:04:40.130860 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.61s
2026-03-29 01:04:40.130894 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.39s
2026-03-29 01:04:40.130901 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.31s
2026-03-29 01:04:40.130907 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.27s
2026-03-29 01:04:40.130913 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.14s
2026-03-29 01:04:40.130919 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.00s
2026-03-29 01:04:40.130925 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.95s
2026-03-29 01:04:40.130930 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s
2026-03-29 01:04:40.130936 | orchestrator |
2026-03-29 01:04:40.131334 | orchestrator |
2026-03-29 01:04:40.131361 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 01:04:40.131369 | orchestrator |
2026-03-29 01:04:40.131376 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 01:04:40.131383 | orchestrator | Sunday 29 March 2026 01:02:36 +0000 (0:00:00.239) 0:00:00.239 **********
2026-03-29 01:04:40.131390 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:04:40.131398 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:04:40.131405 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:04:40.131412 | orchestrator |
2026-03-29 01:04:40.131431 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 01:04:40.131438 | orchestrator | Sunday 29 March 2026 01:02:36 +0000 (0:00:00.275) 0:00:00.515 **********
2026-03-29 01:04:40.131446 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-03-29 01:04:40.131463 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-03-29 01:04:40.131470 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-03-29 01:04:40.131477 | orchestrator |
2026-03-29 01:04:40.131484 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-03-29 01:04:40.131491 | orchestrator |
2026-03-29 01:04:40.131498 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-29 01:04:40.131504 | orchestrator | Sunday 29 March 2026 01:02:37 +0000 (0:00:00.393) 0:00:00.909 **********
2026-03-29 01:04:40.131511 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:04:40.131519 | orchestrator |
2026-03-29 01:04:40.131526 | orchestrator | TASK
[service-ks-register : barbican | Creating services] ********************** 2026-03-29 01:04:40.131533 | orchestrator | Sunday 29 March 2026 01:02:37 +0000 (0:00:00.599) 0:00:01.508 ********** 2026-03-29 01:04:40.131541 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-29 01:04:40.131548 | orchestrator | 2026-03-29 01:04:40.131555 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-29 01:04:40.131607 | orchestrator | Sunday 29 March 2026 01:02:41 +0000 (0:00:03.476) 0:00:04.985 ********** 2026-03-29 01:04:40.131614 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-29 01:04:40.131622 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-29 01:04:40.131629 | orchestrator | 2026-03-29 01:04:40.131636 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-29 01:04:40.131642 | orchestrator | Sunday 29 March 2026 01:02:48 +0000 (0:00:06.897) 0:00:11.883 ********** 2026-03-29 01:04:40.131649 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 01:04:40.131657 | orchestrator | 2026-03-29 01:04:40.131663 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-29 01:04:40.131670 | orchestrator | Sunday 29 March 2026 01:02:51 +0000 (0:00:03.673) 0:00:15.557 ********** 2026-03-29 01:04:40.131677 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 01:04:40.131684 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-29 01:04:40.131691 | orchestrator | 2026-03-29 01:04:40.131698 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-29 01:04:40.131705 | orchestrator | Sunday 29 March 2026 01:02:55 +0000 (0:00:03.813) 0:00:19.370 
********** 2026-03-29 01:04:40.131712 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:04:40.131720 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-29 01:04:40.131728 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-29 01:04:40.131734 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-29 01:04:40.131740 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-29 01:04:40.131746 | orchestrator | 2026-03-29 01:04:40.131753 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-29 01:04:40.131760 | orchestrator | Sunday 29 March 2026 01:03:13 +0000 (0:00:18.058) 0:00:37.428 ********** 2026-03-29 01:04:40.131766 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-29 01:04:40.131773 | orchestrator | 2026-03-29 01:04:40.131780 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-29 01:04:40.131786 | orchestrator | Sunday 29 March 2026 01:03:17 +0000 (0:00:03.331) 0:00:40.759 ********** 2026-03-29 01:04:40.131796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.131830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.131845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.131866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.131884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.131892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.131914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.131930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.131940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.131947 | orchestrator | 2026-03-29 01:04:40.131956 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-29 01:04:40.131965 | orchestrator | Sunday 29 March 2026 01:03:18 +0000 (0:00:01.809) 0:00:42.569 ********** 2026-03-29 01:04:40.131974 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-29 01:04:40.131982 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-29 01:04:40.131991 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-29 01:04:40.132000 | orchestrator | 2026-03-29 01:04:40.132030 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-29 01:04:40.132039 | orchestrator | Sunday 29 March 2026 01:03:20 +0000 (0:00:01.259) 0:00:43.829 ********** 2026-03-29 01:04:40.132046 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:40.132054 | orchestrator | 2026-03-29 01:04:40.132062 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-29 01:04:40.132071 | orchestrator | Sunday 29 March 2026 01:03:20 +0000 (0:00:00.137) 0:00:43.967 ********** 2026-03-29 01:04:40.132079 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:40.132087 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:04:40.132095 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:04:40.132103 | orchestrator | 2026-03-29 01:04:40.132110 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-29 01:04:40.132119 | orchestrator | Sunday 29 March 2026 01:03:21 +0000 (0:00:00.645) 0:00:44.612 ********** 2026-03-29 01:04:40.132127 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:04:40.132143 | 
orchestrator | 2026-03-29 01:04:40.132153 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-29 01:04:40.132161 | orchestrator | Sunday 29 March 2026 01:03:21 +0000 (0:00:00.890) 0:00:45.502 ********** 2026-03-29 01:04:40.132169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.132198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.132209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.132218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132226 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132258 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132280 | orchestrator | 2026-03-29 01:04:40.132289 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-03-29 01:04:40.132297 | orchestrator | Sunday 29 March 2026 01:03:25 +0000 (0:00:03.945) 0:00:49.447 ********** 2026-03-29 01:04:40.132306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:04:40.132330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.132339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.132348 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:40.132368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:04:40.132378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.132387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.132396 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:04:40.132405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:04:40.132420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-03-29 01:04:40.132429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.132438 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:04:40.132447 | orchestrator | 2026-03-29 01:04:40.132460 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-29 01:04:40.132470 | orchestrator | Sunday 29 March 2026 01:03:27 +0000 (0:00:02.091) 0:00:51.539 ********** 2026-03-29 01:04:40.132484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:04:40.132493 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.132508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.132516 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:40.132525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:04:40.132534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.132552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.132561 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:04:40.132570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:04:40.132585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.132594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.132603 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:04:40.132611 | orchestrator | 2026-03-29 01:04:40.132619 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-29 01:04:40.132625 | orchestrator | Sunday 29 March 2026 01:03:29 +0000 (0:00:01.348) 0:00:52.888 ********** 2026-03-29 01:04:40.132634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.132653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.132664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.132678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132743 | orchestrator | 2026-03-29 01:04:40.132752 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-03-29 01:04:40.132760 | orchestrator | Sunday 29 March 2026 01:03:32 +0000 (0:00:03.547) 0:00:56.436 ********** 2026-03-29 01:04:40.132768 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:04:40.132776 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:04:40.132785 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:04:40.132794 | orchestrator | 2026-03-29 01:04:40.132803 | 
orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-03-29 01:04:40.132812 | orchestrator | Sunday 29 March 2026 01:03:35 +0000 (0:00:02.726) 0:00:59.163 ********** 2026-03-29 01:04:40.132820 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:04:40.132829 | orchestrator | 2026-03-29 01:04:40.132839 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-03-29 01:04:40.132848 | orchestrator | Sunday 29 March 2026 01:03:36 +0000 (0:00:01.233) 0:01:00.396 ********** 2026-03-29 01:04:40.132857 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:40.132865 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:04:40.132875 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:04:40.132882 | orchestrator | 2026-03-29 01:04:40.132890 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-03-29 01:04:40.132898 | orchestrator | Sunday 29 March 2026 01:03:37 +0000 (0:00:00.465) 0:01:00.862 ********** 2026-03-29 01:04:40.132907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.132922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.132935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.132953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.132995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.133062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.133080 | orchestrator | 2026-03-29 01:04:40.133089 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-29 01:04:40.133095 | orchestrator | Sunday 29 March 2026 01:03:47 +0000 (0:00:09.853) 0:01:10.715 ********** 2026-03-29 01:04:40.133102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:04:40.133108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2026-03-29 01:04:40.133115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.133121 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:40.133132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:04:40.133146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.133153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.133159 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:04:40.133167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-29 01:04:40.133174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.133181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:04:40.133187 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:04:40.133193 | orchestrator | 2026-03-29 01:04:40.133200 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-29 01:04:40.133205 | orchestrator | Sunday 29 March 2026 01:03:48 +0000 (0:00:01.011) 0:01:11.727 ********** 2026-03-29 01:04:40.133222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.133234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.133241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:40.133247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.133253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.133273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.133281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.133288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.133295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:04:40.133302 | orchestrator | 2026-03-29 01:04:40.133309 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-29 01:04:40.133316 | orchestrator | Sunday 29 March 2026 01:03:51 +0000 (0:00:03.494) 0:01:15.221 ********** 2026-03-29 01:04:40.133322 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:40.133328 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:04:40.133334 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:04:40.133340 | orchestrator | 2026-03-29 01:04:40.133347 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-29 01:04:40.133354 | orchestrator | Sunday 29 March 2026 01:03:52 +0000 (0:00:00.711) 0:01:15.933 ********** 2026-03-29 01:04:40.133360 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:04:40.133367 | orchestrator | 2026-03-29 01:04:40.133373 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-29 01:04:40.133379 | orchestrator | Sunday 29 March 2026 01:03:54 +0000 (0:00:02.181) 0:01:18.115 ********** 2026-03-29 01:04:40.133385 | orchestrator | changed: [testbed-node-0] 
2026-03-29 01:04:40.133392 | orchestrator | 2026-03-29 01:04:40.133398 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-29 01:04:40.133405 | orchestrator | Sunday 29 March 2026 01:03:56 +0000 (0:00:02.065) 0:01:20.180 ********** 2026-03-29 01:04:40.133417 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:04:40.133424 | orchestrator | 2026-03-29 01:04:40.133430 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-29 01:04:40.133437 | orchestrator | Sunday 29 March 2026 01:04:07 +0000 (0:00:11.285) 0:01:31.466 ********** 2026-03-29 01:04:40.133444 | orchestrator | 2026-03-29 01:04:40.133451 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-29 01:04:40.133458 | orchestrator | Sunday 29 March 2026 01:04:08 +0000 (0:00:00.160) 0:01:31.626 ********** 2026-03-29 01:04:40.133464 | orchestrator | 2026-03-29 01:04:40.133472 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-29 01:04:40.133478 | orchestrator | Sunday 29 March 2026 01:04:08 +0000 (0:00:00.173) 0:01:31.800 ********** 2026-03-29 01:04:40.133485 | orchestrator | 2026-03-29 01:04:40.133492 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-29 01:04:40.133500 | orchestrator | Sunday 29 March 2026 01:04:08 +0000 (0:00:00.197) 0:01:31.997 ********** 2026-03-29 01:04:40.133506 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:04:40.133514 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:04:40.133522 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:04:40.133529 | orchestrator | 2026-03-29 01:04:40.133536 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-29 01:04:40.133544 | orchestrator | Sunday 29 March 2026 01:04:21 +0000 (0:00:13.574) 0:01:45.572 ********** 
2026-03-29 01:04:40.133552 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:04:40.133561 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:04:40.133575 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:04:40.133582 | orchestrator | 2026-03-29 01:04:40.133589 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-29 01:04:40.133598 | orchestrator | Sunday 29 March 2026 01:04:27 +0000 (0:00:05.307) 0:01:50.879 ********** 2026-03-29 01:04:40.133606 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:04:40.133615 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:04:40.133623 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:04:40.133630 | orchestrator | 2026-03-29 01:04:40.133638 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:04:40.133653 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:04:40.133663 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 01:04:40.133670 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 01:04:40.133678 | orchestrator | 2026-03-29 01:04:40.133687 | orchestrator | 2026-03-29 01:04:40.133695 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:04:40.133702 | orchestrator | Sunday 29 March 2026 01:04:38 +0000 (0:00:11.391) 0:02:02.271 ********** 2026-03-29 01:04:40.133709 | orchestrator | =============================================================================== 2026-03-29 01:04:40.133717 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.06s 2026-03-29 01:04:40.133726 | orchestrator | barbican : Restart barbican-api container ------------------------------ 13.57s 2026-03-29 
01:04:40.133760 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.39s 2026-03-29 01:04:40.133768 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.29s 2026-03-29 01:04:40.133775 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.85s 2026-03-29 01:04:40.133783 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.90s 2026-03-29 01:04:40.133791 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.31s 2026-03-29 01:04:40.133799 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.95s 2026-03-29 01:04:40.133813 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.81s 2026-03-29 01:04:40.133822 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.67s 2026-03-29 01:04:40.133831 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.55s 2026-03-29 01:04:40.133836 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.49s 2026-03-29 01:04:40.133842 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.48s 2026-03-29 01:04:40.133850 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.33s 2026-03-29 01:04:40.133858 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.73s 2026-03-29 01:04:40.133867 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.18s 2026-03-29 01:04:40.133875 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.09s 2026-03-29 01:04:40.133883 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.07s 2026-03-29 
01:04:40.133891 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.81s 2026-03-29 01:04:40.133899 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.35s 2026-03-29 01:04:40.133906 | orchestrator | 2026-03-29 01:04:40 | INFO  | Task 0e23e958-f8a5-4f63-8c44-baed85d6b6a8 is in state STARTED 2026-03-29 01:04:40.133915 | orchestrator | 2026-03-29 01:04:40 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:04:43.159434 | orchestrator | 2026-03-29 01:04:43 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:04:43.159777 | orchestrator | 2026-03-29 01:04:43 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED 2026-03-29 01:04:43.160635 | orchestrator | 2026-03-29 01:04:43 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:04:43.161274 | orchestrator | 2026-03-29 01:04:43 | INFO  | Task 0e23e958-f8a5-4f63-8c44-baed85d6b6a8 is in state STARTED 2026-03-29 01:04:43.161301 | orchestrator | 2026-03-29 01:04:43 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:04:46.191125 | orchestrator | 2026-03-29 01:04:46 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:04:46.191177 | orchestrator | 2026-03-29 01:04:46 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED 2026-03-29 01:04:46.191828 | orchestrator | 2026-03-29 01:04:46 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:04:46.192446 | orchestrator | 2026-03-29 01:04:46 | INFO  | Task 0e23e958-f8a5-4f63-8c44-baed85d6b6a8 is in state STARTED 2026-03-29 01:04:46.192693 | orchestrator | 2026-03-29 01:04:46 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:04:49.227620 | orchestrator | 2026-03-29 01:04:49 | INFO  | Task c98cb759-6f8f-4611-b0d6-af4f5f3ac4a5 is in state STARTED 2026-03-29 01:04:49.227978 | orchestrator | 2026-03-29 01:04:49 | 
INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:04:49.228662 | orchestrator | 2026-03-29 01:04:49 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED 2026-03-29 01:04:49.231090 | orchestrator | 2026-03-29 01:04:49 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED 2026-03-29 01:04:49.232041 | orchestrator | 2026-03-29 01:04:49.232069 | orchestrator | 2026-03-29 01:04:49 | INFO  | Task 0e23e958-f8a5-4f63-8c44-baed85d6b6a8 is in state SUCCESS 2026-03-29 01:04:49.233329 | orchestrator | 2026-03-29 01:04:49.233352 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:04:49.233357 | orchestrator | 2026-03-29 01:04:49.233361 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:04:49.233377 | orchestrator | Sunday 29 March 2026 01:03:33 +0000 (0:00:00.679) 0:00:00.681 ********** 2026-03-29 01:04:49.233381 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:04:49.233386 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:04:49.233390 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:04:49.233393 | orchestrator | 2026-03-29 01:04:49.233397 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:04:49.233401 | orchestrator | Sunday 29 March 2026 01:03:34 +0000 (0:00:00.515) 0:00:01.197 ********** 2026-03-29 01:04:49.233406 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-29 01:04:49.233413 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-29 01:04:49.233419 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-29 01:04:49.233425 | orchestrator | 2026-03-29 01:04:49.233430 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-29 01:04:49.233436 | orchestrator | 2026-03-29 01:04:49.233443 | 
orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-29 01:04:49.233449 | orchestrator | Sunday 29 March 2026 01:03:34 +0000 (0:00:00.515) 0:00:01.712 ********** 2026-03-29 01:04:49.233455 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:04:49.233461 | orchestrator | 2026-03-29 01:04:49.233469 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-29 01:04:49.233473 | orchestrator | Sunday 29 March 2026 01:03:35 +0000 (0:00:01.045) 0:00:02.758 ********** 2026-03-29 01:04:49.233484 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-29 01:04:49.233488 | orchestrator | 2026-03-29 01:04:49.233492 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-29 01:04:49.233496 | orchestrator | Sunday 29 March 2026 01:03:39 +0000 (0:00:04.008) 0:00:06.766 ********** 2026-03-29 01:04:49.233500 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-29 01:04:49.233504 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-29 01:04:49.233508 | orchestrator | 2026-03-29 01:04:49.233512 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-29 01:04:49.233516 | orchestrator | Sunday 29 March 2026 01:03:47 +0000 (0:00:07.704) 0:00:14.470 ********** 2026-03-29 01:04:49.233520 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 01:04:49.233524 | orchestrator | 2026-03-29 01:04:49.233527 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-29 01:04:49.233531 | orchestrator | Sunday 29 March 2026 01:03:50 +0000 (0:00:03.285) 0:00:17.755 ********** 2026-03-29 01:04:49.233535 | 
orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 01:04:49.233539 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-29 01:04:49.233547 | orchestrator | 2026-03-29 01:04:49.233551 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-29 01:04:49.233560 | orchestrator | Sunday 29 March 2026 01:03:54 +0000 (0:00:03.708) 0:00:21.464 ********** 2026-03-29 01:04:49.233563 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:04:49.233571 | orchestrator | 2026-03-29 01:04:49.233574 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-29 01:04:49.233578 | orchestrator | Sunday 29 March 2026 01:03:57 +0000 (0:00:03.630) 0:00:25.095 ********** 2026-03-29 01:04:49.233582 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-29 01:04:49.233586 | orchestrator | 2026-03-29 01:04:49.233590 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-29 01:04:49.233593 | orchestrator | Sunday 29 March 2026 01:04:02 +0000 (0:00:04.146) 0:00:29.241 ********** 2026-03-29 01:04:49.233597 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:49.233601 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:04:49.233612 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:04:49.233617 | orchestrator | 2026-03-29 01:04:49.233623 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-29 01:04:49.233630 | orchestrator | Sunday 29 March 2026 01:04:02 +0000 (0:00:00.253) 0:00:29.495 ********** 2026-03-29 01:04:49.233646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233675 | orchestrator | 2026-03-29 01:04:49.233679 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-29 01:04:49.233684 | orchestrator | Sunday 29 March 2026 01:04:03 +0000 (0:00:00.762) 0:00:30.258 ********** 2026-03-29 01:04:49.233691 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:49.233697 | orchestrator | 2026-03-29 01:04:49.233704 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-29 01:04:49.233708 | orchestrator | Sunday 29 March 2026 01:04:03 +0000 (0:00:00.166) 0:00:30.424 ********** 2026-03-29 01:04:49.233712 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:49.233716 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:04:49.233720 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:04:49.233723 | orchestrator | 2026-03-29 01:04:49.233727 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-29 01:04:49.233731 | orchestrator | Sunday 29 March 2026 01:04:03 +0000 (0:00:00.725) 0:00:31.150 ********** 2026-03-29 01:04:49.233738 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:04:49.233742 | orchestrator | 2026-03-29 01:04:49.233746 | orchestrator | TASK [service-cert-copy : placement | Copying 
over extra CA certificates] ****** 2026-03-29 01:04:49.233750 | orchestrator | Sunday 29 March 2026 01:04:04 +0000 (0:00:00.480) 0:00:31.631 ********** 2026-03-29 01:04:49.233754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233773 | orchestrator | 2026-03-29 01:04:49.233777 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-29 01:04:49.233781 | orchestrator | Sunday 29 March 2026 01:04:06 +0000 (0:00:01.873) 0:00:33.505 ********** 2026-03-29 01:04:49.233785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:04:49.233791 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:49.233795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:04:49.233799 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:04:49.233808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:04:49.233812 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:04:49.233816 | orchestrator | 2026-03-29 01:04:49.233820 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-29 01:04:49.233824 | orchestrator | Sunday 29 March 2026 01:04:07 +0000 (0:00:00.708) 0:00:34.214 ********** 2026-03-29 01:04:49.233828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:04:49.233832 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:49.233836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:04:49.233842 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:04:49.233846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:04:49.233852 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:04:49.233858 | orchestrator | 2026-03-29 01:04:49.233863 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-29 01:04:49.233875 | orchestrator | Sunday 29 March 2026 
01:04:07 +0000 (0:00:00.686) 0:00:34.900 ********** 2026-03-29 01:04:49.233888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233902 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233913 | orchestrator | 2026-03-29 01:04:49.233920 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-29 01:04:49.233924 | orchestrator | Sunday 29 March 2026 01:04:10 +0000 (0:00:02.303) 0:00:37.204 ********** 2026-03-29 01:04:49.233928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.233947 | orchestrator | 2026-03-29 01:04:49.233951 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-29 01:04:49.233956 | orchestrator | Sunday 29 March 2026 01:04:14 +0000 (0:00:04.158) 0:00:41.362 ********** 2026-03-29 01:04:49.233960 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-29 01:04:49.233967 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-29 01:04:49.233972 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-29 01:04:49.233976 | orchestrator | 2026-03-29 01:04:49.233980 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-29 01:04:49.233987 | orchestrator | Sunday 29 March 2026 01:04:15 +0000 (0:00:01.518) 0:00:42.880 ********** 2026-03-29 01:04:49.234062 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:04:49.234070 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:04:49.234076 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:04:49.234084 | orchestrator | 2026-03-29 01:04:49.234090 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-29 01:04:49.234095 | orchestrator | Sunday 29 March 2026 01:04:17 +0000 (0:00:01.374) 0:00:44.255 ********** 2026-03-29 01:04:49.234099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:04:49.234104 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:04:49.234109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:04:49.234113 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:04:49.234126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-29 01:04:49.234131 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:04:49.234148 | orchestrator | 2026-03-29 01:04:49.234152 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-29 01:04:49.234161 | orchestrator | Sunday 29 March 2026 01:04:17 +0000 (0:00:00.570) 0:00:44.825 ********** 2026-03-29 01:04:49.234166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.234171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.234175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-29 01:04:49.234181 | orchestrator | 2026-03-29 01:04:49.234187 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-29 01:04:49.234194 | orchestrator | 
Sunday 29 March 2026 01:04:18 +0000 (0:00:02.793) 0:00:45.879 **********
2026-03-29 01:04:49.234204 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:04:49.234210 | orchestrator |
2026-03-29 01:04:49.234216 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-03-29 01:04:49.234222 | orchestrator | Sunday 29 March 2026 01:04:21 +0000 (0:00:02.478) 0:00:48.672 **********
2026-03-29 01:04:49.234230 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:04:49.234236 | orchestrator |
2026-03-29 01:04:49.234242 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-03-29 01:04:49.234248 | orchestrator | Sunday 29 March 2026 01:04:23 +0000 (0:00:02.478) 0:00:51.150 **********
2026-03-29 01:04:49.234258 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:04:49.234264 | orchestrator |
2026-03-29 01:04:49.234270 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-29 01:04:49.234282 | orchestrator | Sunday 29 March 2026 01:04:35 +0000 (0:00:11.801) 0:01:02.951 **********
2026-03-29 01:04:49.234289 | orchestrator |
2026-03-29 01:04:49.234295 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-29 01:04:49.234301 | orchestrator | Sunday 29 March 2026 01:04:35 +0000 (0:00:00.130) 0:01:03.082 **********
2026-03-29 01:04:49.234306 | orchestrator |
2026-03-29 01:04:49.234312 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-03-29 01:04:49.234318 | orchestrator | Sunday 29 March 2026 01:04:35 +0000 (0:00:00.064) 0:01:03.146 **********
2026-03-29 01:04:49.234324 | orchestrator |
2026-03-29 01:04:49.234330 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-03-29 01:04:49.234336 | orchestrator | Sunday 29 March 2026 01:04:36 +0000 (0:00:00.131) 0:01:03.277 **********
2026-03-29 01:04:49.234342 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:04:49.234348 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:04:49.234355 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:04:49.234361 | orchestrator |
2026-03-29 01:04:49.234368 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:04:49.234372 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-29 01:04:49.234377 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 01:04:49.234381 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-29 01:04:49.234385 | orchestrator |
2026-03-29 01:04:49.234388 | orchestrator |
2026-03-29 01:04:49.234392 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:04:49.234396 | orchestrator | Sunday 29 March 2026 01:04:46 +0000 (0:00:10.771) 0:01:14.049 **********
2026-03-29 01:04:49.234400 | orchestrator | ===============================================================================
2026-03-29 01:04:49.234403 | orchestrator | placement : Running placement bootstrap container ---------------------- 11.80s
2026-03-29 01:04:49.234407 | orchestrator | placement : Restart placement-api container ---------------------------- 10.77s
2026-03-29 01:04:49.234411 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.70s
2026-03-29 01:04:49.234415 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.16s
2026-03-29 01:04:49.234418 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.15s
2026-03-29 01:04:49.234422 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.01s
2026-03-29 01:04:49.234426 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.71s
2026-03-29 01:04:49.234430 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.63s
2026-03-29 01:04:49.234433 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.29s
2026-03-29 01:04:49.234437 | orchestrator | placement : Creating placement databases -------------------------------- 2.79s
2026-03-29 01:04:49.234441 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.48s
2026-03-29 01:04:49.234444 | orchestrator | placement : Copying over config.json files for services ----------------- 2.30s
2026-03-29 01:04:49.234448 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.87s
2026-03-29 01:04:49.234452 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.52s
2026-03-29 01:04:49.234456 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.37s
2026-03-29 01:04:49.234459 | orchestrator | placement : Check placement containers ---------------------------------- 1.05s
2026-03-29 01:04:49.234463 | orchestrator | placement : include_tasks ----------------------------------------------- 1.05s
2026-03-29 01:04:49.234467 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.76s
2026-03-29 01:04:49.234476 | orchestrator | placement : Set placement policy file ----------------------------------- 0.73s
2026-03-29 01:04:49.234480 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.71s
2026-03-29 01:04:49.234484 | orchestrator | 2026-03-29 01:04:49 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:04:52.259307 | orchestrator | 2026-03-29 01:04:52 | INFO  | Task c98cb759-6f8f-4611-b0d6-af4f5f3ac4a5 is in state STARTED
2026-03-29
01:04:52.261450 | orchestrator | 2026-03-29 01:04:52 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:04:52.263373 | orchestrator | 2026-03-29 01:04:52 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:04:52.265454 | orchestrator | 2026-03-29 01:04:52 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:04:52.265659 | orchestrator | 2026-03-29 01:04:52 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:04:55.300137 | orchestrator | 2026-03-29 01:04:55 | INFO  | Task c98cb759-6f8f-4611-b0d6-af4f5f3ac4a5 is in state SUCCESS
2026-03-29 01:04:55.300413 | orchestrator | 2026-03-29 01:04:55 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:04:55.301005 | orchestrator | 2026-03-29 01:04:55 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:04:55.301702 | orchestrator | 2026-03-29 01:04:55 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:04:55.301729 | orchestrator | 2026-03-29 01:04:55 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:04:58.333240 | orchestrator | 2026-03-29 01:04:58 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:04:58.333890 | orchestrator | 2026-03-29 01:04:58 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:04:58.334732 | orchestrator | 2026-03-29 01:04:58 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:04:58.335673 | orchestrator | 2026-03-29 01:04:58 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:04:58.335699 | orchestrator | 2026-03-29 01:04:58 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:01.362720 | orchestrator | 2026-03-29 01:05:01 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:01.364538 | orchestrator | 2026-03-29 01:05:01 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:01.366690 | orchestrator | 2026-03-29 01:05:01 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:05:01.368008 | orchestrator | 2026-03-29 01:05:01 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:05:01.368066 | orchestrator | 2026-03-29 01:05:01 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:04.398576 | orchestrator | 2026-03-29 01:05:04 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:04.399802 | orchestrator | 2026-03-29 01:05:04 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:04.401718 | orchestrator | 2026-03-29 01:05:04 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:05:04.404279 | orchestrator | 2026-03-29 01:05:04 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:05:04.404379 | orchestrator | 2026-03-29 01:05:04 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:07.434912 | orchestrator | 2026-03-29 01:05:07 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:07.435144 | orchestrator | 2026-03-29 01:05:07 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:07.436089 | orchestrator | 2026-03-29 01:05:07 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:05:07.436801 | orchestrator | 2026-03-29 01:05:07 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:05:07.436946 | orchestrator | 2026-03-29 01:05:07 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:10.472170 | orchestrator | 2026-03-29 01:05:10 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:10.472227 | orchestrator | 2026-03-29 01:05:10 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:10.473393 | orchestrator | 2026-03-29 01:05:10 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:05:10.474745 | orchestrator | 2026-03-29 01:05:10 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:05:10.474790 | orchestrator | 2026-03-29 01:05:10 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:13.510129 | orchestrator | 2026-03-29 01:05:13 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:13.511095 | orchestrator | 2026-03-29 01:05:13 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:13.511820 | orchestrator | 2026-03-29 01:05:13 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:05:13.512777 | orchestrator | 2026-03-29 01:05:13 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:05:13.512810 | orchestrator | 2026-03-29 01:05:13 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:16.545741 | orchestrator | 2026-03-29 01:05:16 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:16.546920 | orchestrator | 2026-03-29 01:05:16 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:16.547519 | orchestrator | 2026-03-29 01:05:16 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:05:16.548361 | orchestrator | 2026-03-29 01:05:16 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:05:16.548384 | orchestrator | 2026-03-29 01:05:16 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:19.576683 | orchestrator | 2026-03-29 01:05:19 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:19.578125 | orchestrator | 2026-03-29 01:05:19 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:19.579306 | orchestrator | 2026-03-29 01:05:19 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:05:19.580536 | orchestrator | 2026-03-29 01:05:19 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:05:19.580573 | orchestrator | 2026-03-29 01:05:19 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:22.605825 | orchestrator | 2026-03-29 01:05:22 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:22.607975 | orchestrator | 2026-03-29 01:05:22 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:22.608939 | orchestrator | 2026-03-29 01:05:22 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:05:22.609149 | orchestrator | 2026-03-29 01:05:22 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:05:22.609452 | orchestrator | 2026-03-29 01:05:22 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:25.634341 | orchestrator | 2026-03-29 01:05:25 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:25.634418 | orchestrator | 2026-03-29 01:05:25 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:25.634745 | orchestrator | 2026-03-29 01:05:25 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:05:25.635491 | orchestrator | 2026-03-29 01:05:25 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:05:25.635523 | orchestrator | 2026-03-29 01:05:25 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:28.656402 | orchestrator | 2026-03-29 01:05:28 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:28.658006 | orchestrator | 2026-03-29 01:05:28 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:28.658594 | orchestrator | 2026-03-29 01:05:28 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:05:28.660545 | orchestrator | 2026-03-29 01:05:28 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:05:28.660584 | orchestrator | 2026-03-29 01:05:28 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:31.693191 | orchestrator | 2026-03-29 01:05:31 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:31.694357 | orchestrator | 2026-03-29 01:05:31 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:31.695240 | orchestrator | 2026-03-29 01:05:31 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state STARTED
2026-03-29 01:05:31.696764 | orchestrator | 2026-03-29 01:05:31 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED
2026-03-29 01:05:31.697438 | orchestrator | 2026-03-29 01:05:31 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:05:34.725055 | orchestrator | 2026-03-29 01:05:34 | INFO  | Task d81f6924-ae1b-4fec-aa34-d24e227b40e0 is in state STARTED
2026-03-29 01:05:34.727346 | orchestrator | 2026-03-29 01:05:34 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED
2026-03-29 01:05:34.729681 | orchestrator | 2026-03-29 01:05:34 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED
2026-03-29 01:05:34.733544 | orchestrator | 2026-03-29 01:05:34 | INFO  | Task 6240e7d5-bb92-485b-bc36-a860fd264a69 is in state SUCCESS
2026-03-29 01:05:34.735302 | orchestrator |
2026-03-29 01:05:34.735361 | orchestrator |
2026-03-29 01:05:34.735371 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 01:05:34.735378 | orchestrator |
2026-03-29 01:05:34.735385 | orchestrator | TASK [Group hosts based on Kolla action]
***************************************
2026-03-29 01:05:34.735402 | orchestrator | Sunday 29 March 2026 01:04:52 +0000 (0:00:00.206) 0:00:00.206 **********
2026-03-29 01:05:34.735408 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:05:34.735416 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:05:34.735422 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:05:34.735428 | orchestrator |
2026-03-29 01:05:34.735497 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 01:05:34.735505 | orchestrator | Sunday 29 March 2026 01:04:52 +0000 (0:00:00.267) 0:00:00.474 **********
2026-03-29 01:05:34.735510 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-03-29 01:05:34.735516 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-03-29 01:05:34.735521 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-03-29 01:05:34.735526 | orchestrator |
2026-03-29 01:05:34.735544 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-03-29 01:05:34.735549 | orchestrator |
2026-03-29 01:05:34.735554 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-03-29 01:05:34.735558 | orchestrator | Sunday 29 March 2026 01:04:53 +0000 (0:00:00.827) 0:00:01.302 **********
2026-03-29 01:05:34.735563 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:05:34.735568 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:05:34.735573 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:05:34.735578 | orchestrator |
2026-03-29 01:05:34.735583 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:05:34.735589 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:05:34.735595 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:05:34.735600 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-29 01:05:34.735605 | orchestrator |
2026-03-29 01:05:34.735609 | orchestrator |
2026-03-29 01:05:34.735614 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:05:34.735619 | orchestrator | Sunday 29 March 2026 01:04:54 +0000 (0:00:00.566) 0:00:01.869 **********
2026-03-29 01:05:34.735624 | orchestrator | ===============================================================================
2026-03-29 01:05:34.735629 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s
2026-03-29 01:05:34.735635 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.57s
2026-03-29 01:05:34.735640 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-03-29 01:05:34.735645 | orchestrator |
2026-03-29 01:05:34.735660 | orchestrator |
2026-03-29 01:05:34.735671 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 01:05:34.735676 | orchestrator |
2026-03-29 01:05:34.735682 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 01:05:34.735687 | orchestrator | Sunday 29 March 2026 01:02:36 +0000 (0:00:00.237) 0:00:00.237 **********
2026-03-29 01:05:34.735692 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:05:34.735697 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:05:34.735702 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:05:34.735708 | orchestrator |
2026-03-29 01:05:34.735750 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 01:05:34.735758 | orchestrator | Sunday 29 March 2026 01:02:36 +0000 (0:00:00.288) 0:00:00.526 **********
2026-03-29 01:05:34.735764 | orchestrator | ok: 
[testbed-node-0] => (item=enable_designate_True)
2026-03-29 01:05:34.735769 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-29 01:05:34.735775 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-29 01:05:34.735781 | orchestrator |
2026-03-29 01:05:34.736086 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-29 01:05:34.736334 | orchestrator |
2026-03-29 01:05:34.736343 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-29 01:05:34.736389 | orchestrator | Sunday 29 March 2026 01:02:37 +0000 (0:00:00.422) 0:00:00.948 **********
2026-03-29 01:05:34.736395 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:05:34.736401 | orchestrator |
2026-03-29 01:05:34.736407 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-29 01:05:34.736413 | orchestrator | Sunday 29 March 2026 01:02:37 +0000 (0:00:00.496) 0:00:01.444 **********
2026-03-29 01:05:34.736419 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-29 01:05:34.736425 | orchestrator |
2026-03-29 01:05:34.736431 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-29 01:05:34.736437 | orchestrator | Sunday 29 March 2026 01:02:41 +0000 (0:00:03.414) 0:00:04.858 **********
2026-03-29 01:05:34.736455 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-29 01:05:34.736461 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-29 01:05:34.736466 | orchestrator |
2026-03-29 01:05:34.736472 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-29 01:05:34.736477 | orchestrator | Sunday 29 March 2026 01:02:47 +0000 (0:00:06.500) 0:00:11.358 **********
2026-03-29 01:05:34.736483 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-29 01:05:34.736488 | orchestrator |
2026-03-29 01:05:34.736493 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-29 01:05:34.736499 | orchestrator | Sunday 29 March 2026 01:02:52 +0000 (0:00:04.268) 0:00:15.627 **********
2026-03-29 01:05:34.736531 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 01:05:34.736538 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-29 01:05:34.736543 | orchestrator |
2026-03-29 01:05:34.736549 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-29 01:05:34.736561 | orchestrator | Sunday 29 March 2026 01:02:55 +0000 (0:00:03.812) 0:00:19.439 **********
2026-03-29 01:05:34.736566 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 01:05:34.736571 | orchestrator |
2026-03-29 01:05:34.736577 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-03-29 01:05:34.736582 | orchestrator | Sunday 29 March 2026 01:02:59 +0000 (0:00:03.400) 0:00:22.840 **********
2026-03-29 01:05:34.736588 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-03-29 01:05:34.736593 | orchestrator |
2026-03-29 01:05:34.736599 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-03-29 01:05:34.736604 | orchestrator | Sunday 29 March 2026 01:03:02 +0000 (0:00:03.669) 0:00:26.509 **********
2026-03-29 01:05:34.736611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.736619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.736625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.736637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.736665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.736672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.736678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.736684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.736882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.736898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.736966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.736981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.736988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.736995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737065 | orchestrator | 2026-03-29 01:05:34.737072 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-29 01:05:34.737082 | orchestrator | Sunday 29 March 2026 01:03:05 +0000 (0:00:03.017) 0:00:29.527 ********** 2026-03-29 01:05:34.737090 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:05:34.737096 | orchestrator | 2026-03-29 01:05:34.737102 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-29 01:05:34.737108 | orchestrator | Sunday 29 March 2026 01:03:06 +0000 (0:00:00.137) 0:00:29.665 ********** 2026-03-29 01:05:34.737117 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:05:34.737122 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:05:34.737130 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:05:34.737136 | orchestrator | 2026-03-29 01:05:34.737142 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-29 01:05:34.737148 | orchestrator | Sunday 29 March 2026 01:03:06 +0000 (0:00:00.280) 0:00:29.945 ********** 2026-03-29 01:05:34.737156 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:05:34.737163 | orchestrator | 2026-03-29 01:05:34.737170 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-29 01:05:34.737177 | orchestrator | Sunday 29 March 2026 01:03:07 +0000 (0:00:00.625) 0:00:30.571 
********** 2026-03-29 01:05:34.737184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.737198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.737204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.737232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2026-03-29 01:05:34.737326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737363 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737369 | orchestrator | 2026-03-29 01:05:34.737375 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-29 01:05:34.737381 | orchestrator | Sunday 29 March 2026 01:03:13 +0000 (0:00:06.084) 0:00:36.655 ********** 2026-03-29 01:05:34.737388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.737398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:05:34.737404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737449 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:05:34.737455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.737462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:05:34.737468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737511 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:05:34.737517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.737523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:05:34.737528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737574 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:05:34.737580 | orchestrator | 2026-03-29 01:05:34.737585 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-29 01:05:34.737591 | orchestrator | Sunday 29 March 2026 01:03:14 +0000 (0:00:01.794) 0:00:38.449 ********** 
2026-03-29 01:05:34.737597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.737603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:05:34.737608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737651 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:05:34.737655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.737659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:05:34.737662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737695 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:05:34.737699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.737702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:05:34.737706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.737737 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:05:34.737741 | orchestrator | 2026-03-29 01:05:34.737744 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-29 01:05:34.737748 | orchestrator | Sunday 29 March 2026 01:03:16 +0000 (0:00:01.658) 0:00:40.108 ********** 2026-03-29 01:05:34.737752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.737756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.737761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.737779 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2026-03-29 01:05:34.737800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737818 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737907 | orchestrator | 2026-03-29 01:05:34.737911 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-29 01:05:34.737915 | orchestrator | Sunday 29 March 2026 01:03:22 +0000 (0:00:06.228) 0:00:46.336 
********** 2026-03-29 01:05:34.737919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.737935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.737942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.737954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.737993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2026-03-29 01:05:34.738133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738160 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738166 | orchestrator | 2026-03-29 01:05:34.738171 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-29 01:05:34.738176 | orchestrator | Sunday 29 March 2026 01:03:41 +0000 (0:00:18.990) 0:01:05.326 ********** 2026-03-29 01:05:34.738181 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-29 01:05:34.738189 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-29 01:05:34.738196 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-29 01:05:34.738201 | orchestrator | 2026-03-29 01:05:34.738206 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-29 01:05:34.738212 | orchestrator | Sunday 29 March 2026 01:03:48 +0000 (0:00:06.467) 0:01:11.793 ********** 2026-03-29 01:05:34.738217 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-29 01:05:34.738222 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-29 01:05:34.738227 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-29 01:05:34.738232 | orchestrator | 2026-03-29 01:05:34.738237 | 
orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-29 01:05:34.738243 | orchestrator | Sunday 29 March 2026 01:03:51 +0000 (0:00:03.045) 0:01:14.839 ********** 2026-03-29 01:05:34.738249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.738260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.738271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.738279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738354 | orchestrator | 2026-03-29 01:05:34.738358 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-29 01:05:34.738362 | orchestrator | Sunday 29 March 2026 01:03:54 +0000 (0:00:03.326) 0:01:18.165 ********** 2026-03-29 01:05:34.738365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.738372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.738376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.738384 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738399 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738414 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738430 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:05:34.738451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:05:34.738456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:05:34.738459 | orchestrator |
2026-03-29 01:05:34.738463 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-29 01:05:34.738469 | orchestrator | Sunday 29 March 2026 01:03:57 +0000 (0:00:02.455) 0:01:20.621 **********
2026-03-29 01:05:34.738473 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:05:34.738477 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:05:34.738481 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:05:34.738485 | orchestrator |
2026-03-29 01:05:34.738489 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-29 01:05:34.738492 | orchestrator | Sunday 29 March 2026 01:03:57 +0000 (0:00:00.408) 0:01:21.029 **********
2026-03-29 01:05:34.738496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-29 01:05:34.738500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '',
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:05:34.738505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738528 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:05:34.738532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.738536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:05:34.738539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738559 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:05:34.738562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-29 01:05:34.738566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-29 01:05:34.738569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-29 01:05:34.738575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-29 01:05:34.738580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-29 01:05:34.738587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:05:34.738590 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:05:34.738593 | orchestrator |
2026-03-29 01:05:34.738597 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-29 01:05:34.738600 | orchestrator | Sunday 29 March 2026 01:03:58 +0000 (0:00:01.381) 0:01:22.411 **********
2026-03-29 01:05:34.738603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes':
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.738607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.738613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-29 01:05:34.738618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:05:34.738684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:05:34.738687 | orchestrator |
2026-03-29 01:05:34.738690 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-29 01:05:34.738694 | orchestrator | Sunday 29 March 2026 01:04:03 +0000 (0:00:04.711) 0:01:27.122 **********
2026-03-29 01:05:34.738697 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:05:34.738700 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:05:34.738703 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:05:34.738706 | orchestrator |
2026-03-29 01:05:34.738710 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-29 01:05:34.738713 | orchestrator | Sunday 29 March 2026 01:04:04 +0000 (0:00:00.568) 0:01:27.691 **********
2026-03-29 01:05:34.738716 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-29 01:05:34.738719 | orchestrator |
2026-03-29 01:05:34.738722 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-29 01:05:34.738725 | orchestrator | Sunday 29 March 2026 01:04:06 +0000 (0:00:02.644) 0:01:30.336 **********
2026-03-29 01:05:34.738729 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-29 01:05:34.738732 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-29 01:05:34.738735 | orchestrator |
2026-03-29 01:05:34.738741 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-29 01:05:34.738746 | orchestrator | Sunday 29 March 2026 01:04:09 +0000 (0:00:02.486) 0:01:32.823 **********
2026-03-29 01:05:34.738750 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:34.738759 | orchestrator |
2026-03-29 01:05:34.738764 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-29 01:05:34.738769 | orchestrator | Sunday 29 March 2026 01:04:26 +0000 (0:00:17.395) 0:01:50.218 **********
2026-03-29 01:05:34.738774 | orchestrator |
2026-03-29 01:05:34.738779 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-29 01:05:34.738784 | orchestrator | Sunday 29 March 2026 01:04:26 +0000 (0:00:00.059) 0:01:50.277 **********
2026-03-29 01:05:34.738788 | orchestrator |
2026-03-29 01:05:34.738794 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-29 01:05:34.738798 | orchestrator | Sunday 29 March 2026 01:04:26 +0000 (0:00:00.061) 0:01:50.339 **********
2026-03-29 01:05:34.738803 | orchestrator |
2026-03-29 01:05:34.738809 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-03-29 01:05:34.738814 | orchestrator | Sunday 29 March 2026 01:04:26 +0000 (0:00:00.062) 0:01:50.401 **********
2026-03-29 01:05:34.738819 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:34.738824 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:05:34.738835 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:05:34.738840 | orchestrator |
2026-03-29 01:05:34.738845 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-03-29 01:05:34.738851 | orchestrator | Sunday 29 March 2026 01:04:35 +0000 (0:00:08.520) 0:01:58.921 **********
2026-03-29 01:05:34.738856 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:34.738862 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:05:34.738867 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:05:34.738872 | orchestrator |
2026-03-29 01:05:34.738877 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-03-29 01:05:34.738883 | orchestrator | Sunday 29 March 2026 01:04:47 +0000 (0:00:12.118) 0:02:11.041 **********
2026-03-29 01:05:34.738889 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:34.738894 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:05:34.738899 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:05:34.738905 | orchestrator |
2026-03-29 01:05:34.738910 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-03-29 01:05:34.738915 | orchestrator | Sunday 29 March 2026 01:04:58 +0000 (0:00:10.712) 0:02:21.753 **********
2026-03-29 01:05:34.738920 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:05:34.738937 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:05:34.738943 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:34.738948 | orchestrator |
2026-03-29 01:05:34.738952 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-03-29 01:05:34.738957 | orchestrator | Sunday 29 March 2026 01:05:06 +0000 (0:00:08.174) 0:02:29.928 **********
2026-03-29 01:05:34.738962 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:05:34.738967 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:05:34.738971 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:34.738976 | orchestrator |
2026-03-29 01:05:34.738981 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-03-29 01:05:34.738990 | orchestrator | Sunday 29 March 2026 01:05:15 +0000 (0:00:09.573) 0:02:39.502 **********
2026-03-29 01:05:34.738995 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:05:34.739001 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:05:34.739005 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:05:34.739008 | orchestrator |
2026-03-29 01:05:34.739011 | orchestrator |
TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-29 01:05:34.739018 | orchestrator | Sunday 29 March 2026 01:05:24 +0000 (0:00:08.951) 0:02:48.453 ********** 2026-03-29 01:05:34.739022 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:05:34.739025 | orchestrator | 2026-03-29 01:05:34.739028 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:05:34.739031 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:05:34.739035 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 01:05:34.739039 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 01:05:34.739044 | orchestrator | 2026-03-29 01:05:34.739049 | orchestrator | 2026-03-29 01:05:34.739057 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:05:34.739064 | orchestrator | Sunday 29 March 2026 01:05:33 +0000 (0:00:08.296) 0:02:56.750 ********** 2026-03-29 01:05:34.739068 | orchestrator | =============================================================================== 2026-03-29 01:05:34.739073 | orchestrator | designate : Copying over designate.conf -------------------------------- 18.99s 2026-03-29 01:05:34.739079 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.40s 2026-03-29 01:05:34.739084 | orchestrator | designate : Restart designate-api container ---------------------------- 12.12s 2026-03-29 01:05:34.739095 | orchestrator | designate : Restart designate-central container ------------------------ 10.71s 2026-03-29 01:05:34.739099 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.57s 2026-03-29 01:05:34.739105 | orchestrator | designate : Restart designate-worker 
container -------------------------- 8.95s 2026-03-29 01:05:34.739110 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.52s 2026-03-29 01:05:34.739118 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.30s 2026-03-29 01:05:34.739125 | orchestrator | designate : Restart designate-producer container ------------------------ 8.17s 2026-03-29 01:05:34.739130 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.50s 2026-03-29 01:05:34.739135 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.47s 2026-03-29 01:05:34.739140 | orchestrator | designate : Copying over config.json files for services ----------------- 6.23s 2026-03-29 01:05:34.739146 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.08s 2026-03-29 01:05:34.739151 | orchestrator | designate : Check designate containers ---------------------------------- 4.71s 2026-03-29 01:05:34.739157 | orchestrator | service-ks-register : designate | Creating projects --------------------- 4.27s 2026-03-29 01:05:34.739162 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.81s 2026-03-29 01:05:34.739167 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.67s 2026-03-29 01:05:34.739172 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.41s 2026-03-29 01:05:34.739180 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.40s 2026-03-29 01:05:34.739185 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.33s 2026-03-29 01:05:34.739190 | orchestrator | 2026-03-29 01:05:34 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:05:34.739195 | orchestrator | 2026-03-29 01:05:34 | INFO  | Wait 1 
second(s) until the next check 2026-03-29 01:05:37.824651 | orchestrator | 2026-03-29 01:05:37 | INFO  | Task d81f6924-ae1b-4fec-aa34-d24e227b40e0 is in state STARTED 2026-03-29 01:05:37.824706 | orchestrator | 2026-03-29 01:05:37 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:05:37.824713 | orchestrator | 2026-03-29 01:05:37 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED 2026-03-29 01:05:37.824718 | orchestrator | 2026-03-29 01:05:37 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:05:37.824724 | orchestrator | 2026-03-29 01:05:37 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:05:40.796734 | orchestrator | 2026-03-29 01:05:40 | INFO  | Task d81f6924-ae1b-4fec-aa34-d24e227b40e0 is in state STARTED 2026-03-29 01:05:40.798640 | orchestrator | 2026-03-29 01:05:40 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:05:40.800011 | orchestrator | 2026-03-29 01:05:40 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED 2026-03-29 01:05:40.802046 | orchestrator | 2026-03-29 01:05:40 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:05:40.802095 | orchestrator | 2026-03-29 01:05:40 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:05:43.834652 | orchestrator | 2026-03-29 01:05:43 | INFO  | Task d81f6924-ae1b-4fec-aa34-d24e227b40e0 is in state STARTED 2026-03-29 01:05:43.835149 | orchestrator | 2026-03-29 01:05:43 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:05:43.835920 | orchestrator | 2026-03-29 01:05:43 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED 2026-03-29 01:05:43.836470 | orchestrator | 2026-03-29 01:05:43 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:05:43.836513 | orchestrator | 2026-03-29 01:05:43 | INFO  | Wait 1 second(s) until the next check 
2026-03-29 01:06:05.221054 | orchestrator | 2026-03-29
01:06:05 | INFO  | Task d81f6924-ae1b-4fec-aa34-d24e227b40e0 is in state STARTED 2026-03-29 01:06:05.222852 | orchestrator | 2026-03-29 01:06:05 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:05.224039 | orchestrator | 2026-03-29 01:06:05 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED 2026-03-29 01:06:05.224957 | orchestrator | 2026-03-29 01:06:05 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:05.225282 | orchestrator | 2026-03-29 01:06:05 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:08.263256 | orchestrator | 2026-03-29 01:06:08 | INFO  | Task d81f6924-ae1b-4fec-aa34-d24e227b40e0 is in state SUCCESS 2026-03-29 01:06:08.263806 | orchestrator | 2026-03-29 01:06:08 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:06:08.265867 | orchestrator | 2026-03-29 01:06:08 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:08.266524 | orchestrator | 2026-03-29 01:06:08 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED 2026-03-29 01:06:08.267312 | orchestrator | 2026-03-29 01:06:08 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:08.267335 | orchestrator | 2026-03-29 01:06:08 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:11.312125 | orchestrator | 2026-03-29 01:06:11 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:06:11.312189 | orchestrator | 2026-03-29 01:06:11 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:11.312196 | orchestrator | 2026-03-29 01:06:11 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state STARTED 2026-03-29 01:06:11.314898 | orchestrator | 2026-03-29 01:06:11 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:11.314952 | orchestrator | 2026-03-29 
01:06:11 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:06:32.621961 | orchestrator | 2026-03-29 01:06:32 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:06:32.624370 | orchestrator | 2026-03-29 01:06:32 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:32.628537 | orchestrator | 2026-03-29 01:06:32 | INFO  | Task 709e473c-ed8c-4855-8f23-1d170404af1e is in state SUCCESS 2026-03-29 01:06:32.629547 | orchestrator | 2026-03-29 01:06:32.629587 | orchestrator | 2026-03-29 01:06:32.629593 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:06:32.629597 | orchestrator | 2026-03-29 01:06:32.629601 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:06:32.629604 | orchestrator | Sunday 29 March 2026 01:05:37 +0000 (0:00:00.510) 0:00:00.510 ********** 2026-03-29 01:06:32.629608 | orchestrator | ok: [testbed-manager] 2026-03-29 01:06:32.629612 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:06:32.629615 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:06:32.629618 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:06:32.629622 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:06:32.629637 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:06:32.629641 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:06:32.629644 | orchestrator | 2026-03-29 01:06:32.629647 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:06:32.629651 | orchestrator | Sunday 29 March 2026 01:05:38 +0000 (0:00:00.915) 0:00:01.426 ********** 2026-03-29 01:06:32.629656 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-29 01:06:32.629662 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-29 01:06:32.629667 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-29 01:06:32.629672 | orchestrator | ok: 
[testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-29 01:06:32.629677 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-29 01:06:32.629682 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-29 01:06:32.629686 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-29 01:06:32.629691 | orchestrator | 2026-03-29 01:06:32.629697 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-29 01:06:32.629702 | orchestrator | 2026-03-29 01:06:32.629707 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-29 01:06:32.629712 | orchestrator | Sunday 29 March 2026 01:05:39 +0000 (0:00:00.719) 0:00:02.145 ********** 2026-03-29 01:06:32.629718 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:06:32.629725 | orchestrator | 2026-03-29 01:06:32.629730 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-29 01:06:32.629735 | orchestrator | Sunday 29 March 2026 01:05:40 +0000 (0:00:01.470) 0:00:03.615 ********** 2026-03-29 01:06:32.629740 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-29 01:06:32.629745 | orchestrator | 2026-03-29 01:06:32.629750 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-29 01:06:32.629755 | orchestrator | Sunday 29 March 2026 01:05:44 +0000 (0:00:03.547) 0:00:07.163 ********** 2026-03-29 01:06:32.629760 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-29 01:06:32.629766 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s 
-> public) 2026-03-29 01:06:32.629771 | orchestrator | 2026-03-29 01:06:32.629776 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-29 01:06:32.629780 | orchestrator | Sunday 29 March 2026 01:05:50 +0000 (0:00:05.813) 0:00:12.977 ********** 2026-03-29 01:06:32.629785 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-29 01:06:32.629790 | orchestrator | 2026-03-29 01:06:32.629795 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-29 01:06:32.629799 | orchestrator | Sunday 29 March 2026 01:05:52 +0000 (0:00:02.409) 0:00:15.386 ********** 2026-03-29 01:06:32.629804 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 01:06:32.629810 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-29 01:06:32.629815 | orchestrator | 2026-03-29 01:06:32.629820 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-29 01:06:32.629826 | orchestrator | Sunday 29 March 2026 01:05:56 +0000 (0:00:03.360) 0:00:18.747 ********** 2026-03-29 01:06:32.629830 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-29 01:06:32.629833 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-29 01:06:32.629836 | orchestrator | 2026-03-29 01:06:32.629853 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-29 01:06:32.629859 | orchestrator | Sunday 29 March 2026 01:06:01 +0000 (0:00:05.221) 0:00:23.968 ********** 2026-03-29 01:06:32.629864 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-29 01:06:32.629875 | orchestrator | 2026-03-29 01:06:32.629880 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:06:32.629886 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2026-03-29 01:06:32.629891 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:06:32.629896 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:06:32.629901 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:06:32.629907 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:06:32.629923 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:06:32.629928 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:06:32.629933 | orchestrator | 2026-03-29 01:06:32.629938 | orchestrator | 2026-03-29 01:06:32.629944 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:06:32.629949 | orchestrator | Sunday 29 March 2026 01:06:05 +0000 (0:00:04.000) 0:00:27.968 ********** 2026-03-29 01:06:32.629954 | orchestrator | =============================================================================== 2026-03-29 01:06:32.629959 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.81s 2026-03-29 01:06:32.629965 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.22s 2026-03-29 01:06:32.630068 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.00s 2026-03-29 01:06:32.630078 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.55s 2026-03-29 01:06:32.630084 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.36s 2026-03-29 01:06:32.630089 | orchestrator | service-ks-register : ceph-rgw | Creating projects 
---------------------- 2.41s 2026-03-29 01:06:32.630095 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.47s 2026-03-29 01:06:32.630100 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.92s 2026-03-29 01:06:32.630105 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s 2026-03-29 01:06:32.630110 | orchestrator | 2026-03-29 01:06:32.630115 | orchestrator | 2026-03-29 01:06:32.630120 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:06:32.630125 | orchestrator | 2026-03-29 01:06:32.630130 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:06:32.630135 | orchestrator | Sunday 29 March 2026 01:04:44 +0000 (0:00:00.305) 0:00:00.305 ********** 2026-03-29 01:06:32.630140 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:06:32.630146 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:06:32.630151 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:06:32.630156 | orchestrator | 2026-03-29 01:06:32.630162 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:06:32.630167 | orchestrator | Sunday 29 March 2026 01:04:45 +0000 (0:00:00.357) 0:00:00.663 ********** 2026-03-29 01:06:32.630172 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-29 01:06:32.630177 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-29 01:06:32.630182 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-29 01:06:32.630187 | orchestrator | 2026-03-29 01:06:32.630192 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-29 01:06:32.630198 | orchestrator | 2026-03-29 01:06:32.630203 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2026-03-29 01:06:32.630215 | orchestrator | Sunday 29 March 2026 01:04:45 +0000 (0:00:00.562) 0:00:01.225 ********** 2026-03-29 01:06:32.630220 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:06:32.630225 | orchestrator | 2026-03-29 01:06:32.630231 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-29 01:06:32.630236 | orchestrator | Sunday 29 March 2026 01:04:46 +0000 (0:00:00.682) 0:00:01.907 ********** 2026-03-29 01:06:32.630241 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-29 01:06:32.630246 | orchestrator | 2026-03-29 01:06:32.630251 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-29 01:06:32.630256 | orchestrator | Sunday 29 March 2026 01:04:49 +0000 (0:00:03.083) 0:00:04.991 ********** 2026-03-29 01:06:32.630261 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-29 01:06:32.630266 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-29 01:06:32.630271 | orchestrator | 2026-03-29 01:06:32.630274 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-29 01:06:32.630277 | orchestrator | Sunday 29 March 2026 01:04:55 +0000 (0:00:06.042) 0:00:11.033 ********** 2026-03-29 01:06:32.630280 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 01:06:32.630283 | orchestrator | 2026-03-29 01:06:32.630286 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-29 01:06:32.630290 | orchestrator | Sunday 29 March 2026 01:04:58 +0000 (0:00:02.956) 0:00:13.990 ********** 2026-03-29 01:06:32.630293 | orchestrator | [WARNING]: Module did not set 
no_log for update_password
2026-03-29 01:06:32.630296 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-29 01:06:32.630299 | orchestrator |
2026-03-29 01:06:32.630302 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-29 01:06:32.630305 | orchestrator | Sunday 29 March 2026 01:05:01 +0000 (0:00:03.240) 0:00:17.230 **********
2026-03-29 01:06:32.630308 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 01:06:32.630311 | orchestrator |
2026-03-29 01:06:32.630314 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-29 01:06:32.630318 | orchestrator | Sunday 29 March 2026 01:05:05 +0000 (0:00:03.229) 0:00:20.460 **********
2026-03-29 01:06:32.630321 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-29 01:06:32.630324 | orchestrator |
2026-03-29 01:06:32.630327 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-29 01:06:32.630330 | orchestrator | Sunday 29 March 2026 01:05:08 +0000 (0:00:03.767) 0:00:24.227 **********
2026-03-29 01:06:32.630333 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:06:32.630336 | orchestrator |
2026-03-29 01:06:32.630339 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-29 01:06:32.630347 | orchestrator | Sunday 29 March 2026 01:05:12 +0000 (0:00:03.455) 0:00:27.683 **********
2026-03-29 01:06:32.630350 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:06:32.630353 | orchestrator |
2026-03-29 01:06:32.630356 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-29 01:06:32.630359 | orchestrator | Sunday 29 March 2026 01:05:15 +0000 (0:00:03.497) 0:00:31.181 **********
2026-03-29 01:06:32.630362 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:06:32.630365 | orchestrator |
2026-03-29 01:06:32.630369 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-29 01:06:32.630372 | orchestrator | Sunday 29 March 2026 01:05:19 +0000 (0:00:03.633) 0:00:34.814 **********
2026-03-29 01:06:32.630376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630409 | orchestrator |
2026-03-29 01:06:32.630412 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-29 01:06:32.630415 | orchestrator | Sunday 29 March 2026 01:05:20 +0000 (0:00:01.367) 0:00:36.181 **********
2026-03-29 01:06:32.630419 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:06:32.630422 | orchestrator |
2026-03-29 01:06:32.630425 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-29 01:06:32.630428 | orchestrator | Sunday 29 March 2026 01:05:21 +0000 (0:00:00.180) 0:00:36.362 **********
2026-03-29 01:06:32.630431 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:06:32.630434 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:06:32.630437 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:06:32.630440 | orchestrator |
2026-03-29 01:06:32.630444 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-03-29 01:06:32.630447 | orchestrator | Sunday 29 March 2026 01:05:21 +0000 (0:00:00.867) 0:00:37.229 **********
2026-03-29 01:06:32.630450 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 01:06:32.630453 | orchestrator |
2026-03-29 01:06:32.630456 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-03-29 01:06:32.630459 | orchestrator | Sunday 29 March 2026 01:05:23 +0000 (0:00:01.427) 0:00:38.657 **********
2026-03-29 01:06:32.630463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630489 | orchestrator |
2026-03-29 01:06:32.630493 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-03-29 01:06:32.630496 | orchestrator | Sunday 29 March 2026 01:05:26 +0000 (0:00:03.408) 0:00:42.065 **********
2026-03-29 01:06:32.630499 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:06:32.630502 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:06:32.630505 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:06:32.630508 | orchestrator |
2026-03-29 01:06:32.630512 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-29 01:06:32.630515 | orchestrator | Sunday 29 March 2026 01:05:27 +0000 (0:00:00.624) 0:00:42.690 **********
2026-03-29 01:06:32.630518 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:06:32.630521 | orchestrator |
2026-03-29 01:06:32.630524 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-03-29 01:06:32.630530 | orchestrator | Sunday 29 March 2026 01:05:28 +0000 (0:00:01.052) 0:00:43.743 **********
2026-03-29 01:06:32.630540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630580 | orchestrator |
2026-03-29 01:06:32.630585 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-03-29 01:06:32.630589 | orchestrator | Sunday 29 March 2026 01:05:31 +0000 (0:00:02.775) 0:00:46.519 **********
2026-03-29 01:06:32.630595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630607 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:06:32.630612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630630 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:06:32.630636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630647 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:06:32.630653 | orchestrator |
2026-03-29 01:06:32.630658 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-03-29 01:06:32.630663 | orchestrator | Sunday 29 March 2026 01:05:32 +0000 (0:00:01.418) 0:00:47.937 **********
2026-03-29 01:06:32.630668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630683 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:06:32.630692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630704 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:06:32.630710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630723 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:06:32.630726 | orchestrator |
2026-03-29 01:06:32.630729 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-03-29 01:06:32.630732 | orchestrator | Sunday 29 March 2026 01:05:34 +0000 (0:00:01.392) 0:00:49.330 **********
2026-03-29 01:06:32.630736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630774 | orchestrator |
2026-03-29 01:06:32.630782 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-03-29 01:06:32.630786 | orchestrator | Sunday 29 March 2026 01:05:36 +0000 (0:00:02.089) 0:00:51.420 **********
2026-03-29 01:06:32.630791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-29 01:06:32.630809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-29 01:06:32.630818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes':
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:32.630824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:32.630829 | orchestrator | 2026-03-29 01:06:32.630834 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-29 01:06:32.630870 | orchestrator | Sunday 29 March 2026 01:05:41 +0000 (0:00:04.982) 0:00:56.402 ********** 2026-03-29 01:06:32.630878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 01:06:32.630892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:06:32.630898 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:06:32.630903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-29 01:06:32.630910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:06:32.630914 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:06:32.630917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}})  2026-03-29 01:06:32.630920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:06:32.630927 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:06:32.630930 | orchestrator | 2026-03-29 01:06:32.630933 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-29 01:06:32.630937 | orchestrator | Sunday 29 March 2026 01:05:41 +0000 (0:00:00.584) 0:00:56.987 ********** 2026-03-29 01:06:32.630946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:06:32.630952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:06:32.630956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511'}}}}) 2026-03-29 01:06:32.630959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:32.630965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:32.630970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:06:32.630973 | orchestrator | 2026-03-29 01:06:32.630976 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-29 01:06:32.630980 | orchestrator | Sunday 29 March 2026 01:05:43 +0000 (0:00:01.906) 0:00:58.893 ********** 2026-03-29 01:06:32.630983 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:06:32.630986 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:06:32.630989 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:06:32.630992 | orchestrator | 2026-03-29 01:06:32.630996 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-29 01:06:32.630999 | orchestrator | Sunday 29 March 2026 01:05:43 +0000 (0:00:00.411) 0:00:59.304 ********** 2026-03-29 01:06:32.631002 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:32.631005 | orchestrator | 2026-03-29 01:06:32.631008 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-29 01:06:32.631011 | orchestrator | Sunday 29 March 2026 01:05:46 +0000 (0:00:02.191) 0:01:01.496 ********** 2026-03-29 01:06:32.631015 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:32.631018 | orchestrator | 2026-03-29 01:06:32.631021 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-29 01:06:32.631026 | orchestrator | Sunday 29 March 2026 01:05:48 +0000 (0:00:02.355) 0:01:03.852 ********** 2026-03-29 01:06:32.631029 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 01:06:32.631032 | orchestrator | 2026-03-29 01:06:32.631042 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-29 01:06:32.631045 | orchestrator | Sunday 29 March 2026 01:06:05 +0000 (0:00:17.112) 0:01:20.964 ********** 2026-03-29 01:06:32.631053 | orchestrator | 2026-03-29 01:06:32.631124 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-29 01:06:32.631130 | orchestrator | Sunday 29 March 2026 01:06:05 +0000 (0:00:00.157) 0:01:21.121 ********** 2026-03-29 01:06:32.631133 | orchestrator | 2026-03-29 01:06:32.631137 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-29 01:06:32.631140 | orchestrator | Sunday 29 March 2026 01:06:05 +0000 (0:00:00.067) 0:01:21.188 ********** 2026-03-29 01:06:32.631143 | orchestrator | 2026-03-29 01:06:32.631146 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-29 01:06:32.631149 | orchestrator | Sunday 29 March 2026 01:06:05 +0000 (0:00:00.091) 0:01:21.280 ********** 2026-03-29 01:06:32.631157 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:32.631160 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:06:32.631196 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:06:32.631201 | orchestrator | 2026-03-29 01:06:32.631204 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-29 01:06:32.631209 | orchestrator | Sunday 29 March 2026 01:06:21 +0000 (0:00:15.732) 0:01:37.012 ********** 2026-03-29 01:06:32.631214 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:06:32.631220 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:06:32.631227 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:06:32.631232 | orchestrator | 2026-03-29 01:06:32.631238 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-29 01:06:32.631244 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-29 01:06:32.631251 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 01:06:32.631256 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 01:06:32.631262 | orchestrator | 2026-03-29 01:06:32.631265 | orchestrator | 2026-03-29 01:06:32.631269 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:06:32.631272 | orchestrator | Sunday 29 March 2026 01:06:31 +0000 (0:00:09.696) 0:01:46.709 ********** 2026-03-29 01:06:32.631275 | orchestrator | =============================================================================== 2026-03-29 01:06:32.631278 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.11s 2026-03-29 01:06:32.631281 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.73s 2026-03-29 01:06:32.631284 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.70s 2026-03-29 01:06:32.631287 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.04s 2026-03-29 01:06:32.631291 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.98s 2026-03-29 01:06:32.631294 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.77s 2026-03-29 01:06:32.631297 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.63s 2026-03-29 01:06:32.631300 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.50s 2026-03-29 01:06:32.631303 | orchestrator | magnum : Creating Magnum trustee domain 
--------------------------------- 3.46s 2026-03-29 01:06:32.631306 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.41s 2026-03-29 01:06:32.631309 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.24s 2026-03-29 01:06:32.631315 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.23s 2026-03-29 01:06:32.631319 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.08s 2026-03-29 01:06:32.631322 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 2.96s 2026-03-29 01:06:32.631325 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.78s 2026-03-29 01:06:32.631328 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.36s 2026-03-29 01:06:32.631331 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.19s 2026-03-29 01:06:32.631334 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.09s 2026-03-29 01:06:32.631337 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.91s 2026-03-29 01:06:32.631340 | orchestrator | magnum : Check if kubeconfig file is supplied --------------------------- 1.43s 2026-03-29 01:06:32.631347 | orchestrator | 2026-03-29 01:06:32 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:32.631356 | orchestrator | 2026-03-29 01:06:32 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:35.691596 | orchestrator | 2026-03-29 01:06:35 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:06:35.692957 | orchestrator | 2026-03-29 01:06:35 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:06:35.694378 | orchestrator | 2026-03-29 01:06:35 | INFO  | 
Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:35.696684 | orchestrator | 2026-03-29 01:06:35 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:35.696728 | orchestrator | 2026-03-29 01:06:35 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:38.741093 | orchestrator | 2026-03-29 01:06:38 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:06:38.741136 | orchestrator | 2026-03-29 01:06:38 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:06:38.742184 | orchestrator | 2026-03-29 01:06:38 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:38.743220 | orchestrator | 2026-03-29 01:06:38 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:38.743250 | orchestrator | 2026-03-29 01:06:38 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:41.776537 | orchestrator | 2026-03-29 01:06:41 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:06:41.777551 | orchestrator | 2026-03-29 01:06:41 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:06:41.778505 | orchestrator | 2026-03-29 01:06:41 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:41.779449 | orchestrator | 2026-03-29 01:06:41 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:41.779687 | orchestrator | 2026-03-29 01:06:41 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:44.811657 | orchestrator | 2026-03-29 01:06:44 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:06:44.811722 | orchestrator | 2026-03-29 01:06:44 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:06:44.812996 | orchestrator | 2026-03-29 01:06:44 | INFO  | Task 
9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:44.814503 | orchestrator | 2026-03-29 01:06:44 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:44.816255 | orchestrator | 2026-03-29 01:06:44 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:47.843038 | orchestrator | 2026-03-29 01:06:47 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:06:47.843712 | orchestrator | 2026-03-29 01:06:47 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:06:47.844624 | orchestrator | 2026-03-29 01:06:47 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:47.845349 | orchestrator | 2026-03-29 01:06:47 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:47.845509 | orchestrator | 2026-03-29 01:06:47 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:50.901617 | orchestrator | 2026-03-29 01:06:50 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:06:50.902085 | orchestrator | 2026-03-29 01:06:50 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:06:50.902610 | orchestrator | 2026-03-29 01:06:50 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:50.904308 | orchestrator | 2026-03-29 01:06:50 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:50.904334 | orchestrator | 2026-03-29 01:06:50 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:53.947439 | orchestrator | 2026-03-29 01:06:53 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:06:53.948154 | orchestrator | 2026-03-29 01:06:53 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:06:53.948922 | orchestrator | 2026-03-29 01:06:53 | INFO  | Task 
9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:53.950635 | orchestrator | 2026-03-29 01:06:53 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:53.950668 | orchestrator | 2026-03-29 01:06:53 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:06:56.984254 | orchestrator | 2026-03-29 01:06:56 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:06:56.984390 | orchestrator | 2026-03-29 01:06:56 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:06:56.987158 | orchestrator | 2026-03-29 01:06:56 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:06:56.987643 | orchestrator | 2026-03-29 01:06:56 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:06:56.987669 | orchestrator | 2026-03-29 01:06:56 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:00.026286 | orchestrator | 2026-03-29 01:07:00 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:00.028900 | orchestrator | 2026-03-29 01:07:00 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:00.029723 | orchestrator | 2026-03-29 01:07:00 | INFO  | Task 9972b157-3864-45b5-8ddb-ff8700484450 is in state STARTED 2026-03-29 01:07:00.030502 | orchestrator | 2026-03-29 01:07:00 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:00.030526 | orchestrator | 2026-03-29 01:07:00 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:03.050835 | orchestrator | 2026-03-29 01:07:03 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:03.051390 | orchestrator | 2026-03-29 01:07:03 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:03.053059 | orchestrator | 2026-03-29 01:07:03 | INFO  | Task 
9972b157-3864-45b5-8ddb-ff8700484450 is in state SUCCESS 2026-03-29 01:07:03.053160 | orchestrator | 2026-03-29 01:07:03.054263 | orchestrator | 2026-03-29 01:07:03.054296 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:07:03.054302 | orchestrator | 2026-03-29 01:07:03.054327 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:07:03.054333 | orchestrator | Sunday 29 March 2026 01:02:36 +0000 (0:00:00.236) 0:00:00.236 ********** 2026-03-29 01:07:03.054337 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:07:03.054342 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:07:03.054346 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:07:03.054350 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:07:03.054354 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:07:03.054358 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:07:03.054384 | orchestrator | 2026-03-29 01:07:03.054410 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:07:03.054415 | orchestrator | Sunday 29 March 2026 01:02:37 +0000 (0:00:00.662) 0:00:00.899 ********** 2026-03-29 01:07:03.054419 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-29 01:07:03.054423 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-29 01:07:03.054441 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-29 01:07:03.054446 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-29 01:07:03.054449 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-29 01:07:03.054453 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-29 01:07:03.054457 | orchestrator | 2026-03-29 01:07:03.054460 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-29 01:07:03.054464 | 
orchestrator | 2026-03-29 01:07:03.054468 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-29 01:07:03.054472 | orchestrator | Sunday 29 March 2026 01:02:37 +0000 (0:00:00.492) 0:00:01.391 ********** 2026-03-29 01:07:03.054492 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:07:03.054496 | orchestrator | 2026-03-29 01:07:03.054500 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-29 01:07:03.054504 | orchestrator | Sunday 29 March 2026 01:02:38 +0000 (0:00:00.984) 0:00:02.375 ********** 2026-03-29 01:07:03.054507 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:07:03.054511 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:07:03.054515 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:07:03.054519 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:07:03.054522 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:07:03.054531 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:07:03.054535 | orchestrator | 2026-03-29 01:07:03.054561 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-29 01:07:03.054566 | orchestrator | Sunday 29 March 2026 01:02:39 +0000 (0:00:01.106) 0:00:03.482 ********** 2026-03-29 01:07:03.054570 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:07:03.054574 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:07:03.054577 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:07:03.054581 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:07:03.054585 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:07:03.054589 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:07:03.054592 | orchestrator | 2026-03-29 01:07:03.054596 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-29 01:07:03.054600 | 
orchestrator | Sunday 29 March 2026 01:02:41 +0000 (0:00:01.042) 0:00:04.525 ********** 2026-03-29 01:07:03.054604 | orchestrator | ok: [testbed-node-0] => { 2026-03-29 01:07:03.054608 | orchestrator |  "changed": false, 2026-03-29 01:07:03.054612 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:03.054615 | orchestrator | } 2026-03-29 01:07:03.054619 | orchestrator | ok: [testbed-node-1] => { 2026-03-29 01:07:03.054623 | orchestrator |  "changed": false, 2026-03-29 01:07:03.054627 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:03.054630 | orchestrator | } 2026-03-29 01:07:03.054676 | orchestrator | ok: [testbed-node-2] => { 2026-03-29 01:07:03.054680 | orchestrator |  "changed": false, 2026-03-29 01:07:03.054683 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:03.054956 | orchestrator | } 2026-03-29 01:07:03.054962 | orchestrator | ok: [testbed-node-3] => { 2026-03-29 01:07:03.054965 | orchestrator |  "changed": false, 2026-03-29 01:07:03.054969 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:03.054973 | orchestrator | } 2026-03-29 01:07:03.054977 | orchestrator | ok: [testbed-node-4] => { 2026-03-29 01:07:03.054981 | orchestrator |  "changed": false, 2026-03-29 01:07:03.054984 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:03.054988 | orchestrator | } 2026-03-29 01:07:03.054992 | orchestrator | ok: [testbed-node-5] => { 2026-03-29 01:07:03.054996 | orchestrator |  "changed": false, 2026-03-29 01:07:03.054999 | orchestrator |  "msg": "All assertions passed" 2026-03-29 01:07:03.055003 | orchestrator | } 2026-03-29 01:07:03.055007 | orchestrator | 2026-03-29 01:07:03.055011 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-29 01:07:03.055020 | orchestrator | Sunday 29 March 2026 01:02:41 +0000 (0:00:00.718) 0:00:05.243 ********** 2026-03-29 01:07:03.055024 | orchestrator | skipping: [testbed-node-0] 2026-03-29 
01:07:03.055028 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.055032 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.055035 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.055040 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.055046 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.055053 | orchestrator | 2026-03-29 01:07:03.055059 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-29 01:07:03.055065 | orchestrator | Sunday 29 March 2026 01:02:42 +0000 (0:00:00.534) 0:00:05.777 ********** 2026-03-29 01:07:03.055071 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-29 01:07:03.055077 | orchestrator | 2026-03-29 01:07:03.055084 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-29 01:07:03.055090 | orchestrator | Sunday 29 March 2026 01:02:45 +0000 (0:00:03.308) 0:00:09.086 ********** 2026-03-29 01:07:03.055096 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-29 01:07:03.055103 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-29 01:07:03.055109 | orchestrator | 2026-03-29 01:07:03.055135 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-29 01:07:03.055140 | orchestrator | Sunday 29 March 2026 01:02:52 +0000 (0:00:07.102) 0:00:16.188 ********** 2026-03-29 01:07:03.055144 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 01:07:03.055148 | orchestrator | 2026-03-29 01:07:03.055152 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-29 01:07:03.055155 | orchestrator | Sunday 29 March 2026 01:02:55 +0000 (0:00:03.201) 0:00:19.390 ********** 2026-03-29 01:07:03.055159 | orchestrator | 
[WARNING]: Module did not set no_log for update_password 2026-03-29 01:07:03.055163 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-29 01:07:03.055167 | orchestrator | 2026-03-29 01:07:03.055171 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-29 01:07:03.055174 | orchestrator | Sunday 29 March 2026 01:02:59 +0000 (0:00:03.793) 0:00:23.184 ********** 2026-03-29 01:07:03.055178 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:07:03.055182 | orchestrator | 2026-03-29 01:07:03.055185 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-29 01:07:03.055189 | orchestrator | Sunday 29 March 2026 01:03:02 +0000 (0:00:02.963) 0:00:26.147 ********** 2026-03-29 01:07:03.055193 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-29 01:07:03.055197 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-29 01:07:03.055201 | orchestrator | 2026-03-29 01:07:03.055204 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-29 01:07:03.055210 | orchestrator | Sunday 29 March 2026 01:03:10 +0000 (0:00:08.184) 0:00:34.332 ********** 2026-03-29 01:07:03.055214 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.055218 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.055221 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.055225 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.055229 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.055232 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.055236 | orchestrator | 2026-03-29 01:07:03.055240 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-29 01:07:03.055243 | orchestrator | Sunday 29 March 2026 01:03:11 +0000 
(0:00:00.829) 0:00:35.162 ********** 2026-03-29 01:07:03.055247 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.055286 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.055293 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.055300 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.055314 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.055325 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.055332 | orchestrator | 2026-03-29 01:07:03.055338 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-29 01:07:03.055344 | orchestrator | Sunday 29 March 2026 01:03:15 +0000 (0:00:03.530) 0:00:38.692 ********** 2026-03-29 01:07:03.055348 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:07:03.055352 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:07:03.055355 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:07:03.055359 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:07:03.055362 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:07:03.055366 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:07:03.055369 | orchestrator | 2026-03-29 01:07:03.055373 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-29 01:07:03.055377 | orchestrator | Sunday 29 March 2026 01:03:17 +0000 (0:00:02.449) 0:00:41.142 ********** 2026-03-29 01:07:03.055380 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.055384 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.055387 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.055391 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.055394 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.055398 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.055401 | orchestrator | 2026-03-29 01:07:03.055405 | orchestrator | TASK [neutron : Ensuring config directories exist] 
***************************** 2026-03-29 01:07:03.055408 | orchestrator | Sunday 29 March 2026 01:03:20 +0000 (0:00:02.893) 0:00:44.035 ********** 2026-03-29 01:07:03.055413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.055471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.055484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.055499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.055504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.055507 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.055511 | orchestrator | 2026-03-29 01:07:03.055515 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-29 01:07:03.055518 | orchestrator | Sunday 29 March 2026 01:03:23 +0000 (0:00:03.219) 0:00:47.255 ********** 2026-03-29 01:07:03.055522 | orchestrator | [WARNING]: Skipped 2026-03-29 01:07:03.055526 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-29 01:07:03.055530 | orchestrator | due to this access issue: 2026-03-29 01:07:03.055533 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-29 01:07:03.055537 | orchestrator | a directory 2026-03-29 01:07:03.055540 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:07:03.055544 | orchestrator | 2026-03-29 01:07:03.055560 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-29 01:07:03.055564 | orchestrator | Sunday 29 March 2026 01:03:24 +0000 (0:00:00.883) 0:00:48.138 ********** 2026-03-29 01:07:03.055568 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:07:03.055572 | orchestrator | 2026-03-29 01:07:03.055576 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-29 01:07:03.055579 | orchestrator | Sunday 29 March 2026 01:03:25 +0000 (0:00:01.277) 0:00:49.415 ********** 2026-03-29 01:07:03.055586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.055591 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.055595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.055599 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.055614 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.055621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.055625 | orchestrator | 2026-03-29 01:07:03.055629 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-29 01:07:03.055632 | orchestrator | Sunday 29 March 2026 01:03:30 +0000 (0:00:04.202) 0:00:53.617 ********** 2026-03-29 01:07:03.055636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.055640 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.055644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.055647 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.055662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.055669 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.055689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.055693 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.055699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.055703 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.055707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.055711 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.055714 | orchestrator | 2026-03-29 01:07:03.055718 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-29 01:07:03.055721 | orchestrator | Sunday 29 March 2026 01:03:34 +0000 (0:00:04.062) 0:00:57.680 ********** 2026-03-29 01:07:03.055725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.055731 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.055748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.055753 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.055756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.055760 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.055776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.055781 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.055784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.055792 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.055811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.055818 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.055821 | orchestrator | 2026-03-29 01:07:03.055825 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-29 01:07:03.055831 | orchestrator | Sunday 29 March 2026 01:03:37 +0000 (0:00:03.108) 0:01:00.788 ********** 2026-03-29 01:07:03.055835 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.055839 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.055842 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.055846 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.055849 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.055853 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.055857 | orchestrator | 2026-03-29 01:07:03.055860 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-29 01:07:03.055864 | orchestrator | Sunday 29 March 2026 01:03:40 +0000 (0:00:03.536) 0:01:04.324 ********** 2026-03-29 01:07:03.055868 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.055872 | orchestrator | 2026-03-29 01:07:03.055875 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-29 01:07:03.055879 | orchestrator | Sunday 29 March 2026 01:03:40 +0000 (0:00:00.118) 0:01:04.443 ********** 2026-03-29 01:07:03.055883 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.055886 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.055890 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.055894 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.055897 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.055901 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.055905 | orchestrator | 2026-03-29 
01:07:03.055908 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-29 01:07:03.055912 | orchestrator | Sunday 29 March 2026 01:03:41 +0000 (0:00:00.700) 0:01:05.143 ********** 2026-03-29 01:07:03.055918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.055922 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.055926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.055932 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.055936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.055939 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.055947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.055951 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.055955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.055959 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.055964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.055968 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.055972 | orchestrator | 2026-03-29 01:07:03.055976 | orchestrator | TASK [neutron : 
Copying over config.json files for services] ******************* 2026-03-29 01:07:03.055982 | orchestrator | Sunday 29 March 2026 01:03:45 +0000 (0:00:03.506) 0:01:08.650 ********** 2026-03-29 01:07:03.055985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.055992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.055996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.056002 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.056006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.056012 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.056016 | orchestrator | 2026-03-29 01:07:03.056019 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-29 01:07:03.056023 | orchestrator | Sunday 29 March 2026 01:03:49 +0000 (0:00:04.262) 0:01:12.913 ********** 2026-03-29 01:07:03.056030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.056034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.056041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.056050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.056057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.056066 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.056073 | orchestrator | 2026-03-29 01:07:03.056078 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-29 01:07:03.056084 | orchestrator | Sunday 29 March 2026 01:03:54 +0000 (0:00:05.439) 0:01:18.352 ********** 2026-03-29 01:07:03.056091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.056097 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.056116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.056123 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056129 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056143 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056162 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 01:07:03.056168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056179 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056186 | orchestrator | 2026-03-29 01:07:03.056193 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-29 01:07:03.056199 | orchestrator | Sunday 29 March 2026 01:03:57 +0000 (0:00:02.274) 0:01:20.627 ********** 2026-03-29 01:07:03.056208 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056215 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056221 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:07:03.056227 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056232 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:03.056238 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:07:03.056243 | orchestrator | 2026-03-29 01:07:03.056249 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-29 01:07:03.056255 | orchestrator | Sunday 29 March 2026 01:03:59 +0000 (0:00:02.508) 0:01:23.135 ********** 2026-03-29 01:07:03.056262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056268 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056281 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056301 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.056324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.056331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.056338 | orchestrator | 2026-03-29 01:07:03.056344 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-29 01:07:03.056350 | orchestrator | Sunday 29 March 2026 01:04:02 +0000 (0:00:03.140) 0:01:26.275 ********** 2026-03-29 01:07:03.056357 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056363 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056368 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056374 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056380 | orchestrator | skipping: [testbed-node-3] 
2026-03-29 01:07:03.056385 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056391 | orchestrator | 2026-03-29 01:07:03.056397 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-29 01:07:03.056403 | orchestrator | Sunday 29 March 2026 01:04:05 +0000 (0:00:02.296) 0:01:28.571 ********** 2026-03-29 01:07:03.056410 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056416 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056423 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056429 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056435 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056441 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056447 | orchestrator | 2026-03-29 01:07:03.056454 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-29 01:07:03.056460 | orchestrator | Sunday 29 March 2026 01:04:07 +0000 (0:00:02.396) 0:01:30.968 ********** 2026-03-29 01:07:03.056471 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056477 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056483 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056493 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056500 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056505 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056511 | orchestrator | 2026-03-29 01:07:03.056517 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-29 01:07:03.056523 | orchestrator | Sunday 29 March 2026 01:04:11 +0000 (0:00:04.327) 0:01:35.295 ********** 2026-03-29 01:07:03.056530 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056535 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056543 | orchestrator | skipping: [testbed-node-2] 
2026-03-29 01:07:03.056549 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056556 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056563 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056570 | orchestrator | 2026-03-29 01:07:03.056577 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-29 01:07:03.056583 | orchestrator | Sunday 29 March 2026 01:04:14 +0000 (0:00:02.521) 0:01:37.817 ********** 2026-03-29 01:07:03.056590 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056597 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056603 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056610 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056616 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056622 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056629 | orchestrator | 2026-03-29 01:07:03.056635 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-29 01:07:03.056642 | orchestrator | Sunday 29 March 2026 01:04:16 +0000 (0:00:01.750) 0:01:39.568 ********** 2026-03-29 01:07:03.056649 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056660 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056664 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056668 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056671 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056675 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056679 | orchestrator | 2026-03-29 01:07:03.056682 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-29 01:07:03.056686 | orchestrator | Sunday 29 March 2026 01:04:18 +0000 (0:00:02.075) 0:01:41.643 ********** 2026-03-29 01:07:03.056690 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:03.056694 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056702 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:03.056706 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056709 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:03.056713 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056717 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:03.056721 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056725 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:03.056728 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056732 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-29 01:07:03.056735 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056739 | orchestrator | 2026-03-29 01:07:03.056742 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-29 01:07:03.056746 | orchestrator | Sunday 29 March 2026 01:04:20 +0000 (0:00:01.984) 0:01:43.628 ********** 2026-03-29 01:07:03.056750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.056759 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.056772 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.056779 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056789 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056817 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056825 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056829 | orchestrator | 2026-03-29 01:07:03.056833 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-29 01:07:03.056836 | orchestrator | Sunday 29 March 2026 01:04:22 +0000 (0:00:01.964) 0:01:45.593 ********** 2026-03-29 01:07:03.056844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056848 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.056855 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056868 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.056876 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.056886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.056890 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056894 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056897 | orchestrator | 2026-03-29 01:07:03.056901 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-29 01:07:03.056905 | orchestrator | Sunday 29 March 2026 01:04:24 +0000 (0:00:02.788) 0:01:48.382 ********** 2026-03-29 01:07:03.056908 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056912 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056916 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056919 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056923 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056927 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.056930 | orchestrator | 2026-03-29 01:07:03.056934 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-29 01:07:03.056938 | orchestrator | Sunday 29 March 2026 01:04:26 +0000 (0:00:01.997) 
0:01:50.379 ********** 2026-03-29 01:07:03.056942 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056945 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056949 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056955 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:07:03.056959 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:07:03.056962 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:07:03.056966 | orchestrator | 2026-03-29 01:07:03.056971 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-29 01:07:03.056976 | orchestrator | Sunday 29 March 2026 01:04:32 +0000 (0:00:05.530) 0:01:55.910 ********** 2026-03-29 01:07:03.056979 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.056983 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.056987 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.056991 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.056994 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.056998 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.057002 | orchestrator | 2026-03-29 01:07:03.057005 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-29 01:07:03.057009 | orchestrator | Sunday 29 March 2026 01:04:34 +0000 (0:00:01.876) 0:01:57.786 ********** 2026-03-29 01:07:03.057013 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.057016 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.057020 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.057023 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.057027 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.057031 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.057034 | orchestrator | 2026-03-29 01:07:03.057038 | orchestrator | TASK [neutron : Copying over 
bgp_dragent.ini] ********************************** 2026-03-29 01:07:03.057044 | orchestrator | Sunday 29 March 2026 01:04:37 +0000 (0:00:02.967) 0:02:00.754 ********** 2026-03-29 01:07:03.057050 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.057056 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.057061 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.057067 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.057073 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.057079 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.057085 | orchestrator | 2026-03-29 01:07:03.057091 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-29 01:07:03.057097 | orchestrator | Sunday 29 March 2026 01:04:40 +0000 (0:00:03.164) 0:02:03.918 ********** 2026-03-29 01:07:03.057104 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.057111 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.057117 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.057123 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.057129 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.057135 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.057139 | orchestrator | 2026-03-29 01:07:03.057142 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-29 01:07:03.057146 | orchestrator | Sunday 29 March 2026 01:04:42 +0000 (0:00:02.177) 0:02:06.096 ********** 2026-03-29 01:07:03.057149 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.057153 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.057157 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.057160 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.057164 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.057167 | orchestrator | 
skipping: [testbed-node-5] 2026-03-29 01:07:03.057171 | orchestrator | 2026-03-29 01:07:03.057174 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-29 01:07:03.057178 | orchestrator | Sunday 29 March 2026 01:04:44 +0000 (0:00:01.858) 0:02:07.955 ********** 2026-03-29 01:07:03.057182 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.057186 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.057190 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.057193 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.057197 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.057203 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.057214 | orchestrator | 2026-03-29 01:07:03.057220 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-29 01:07:03.057231 | orchestrator | Sunday 29 March 2026 01:04:46 +0000 (0:00:02.321) 0:02:10.276 ********** 2026-03-29 01:07:03.057237 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.057243 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.057249 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.057255 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.057262 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.057266 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.057269 | orchestrator | 2026-03-29 01:07:03.057273 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-29 01:07:03.057276 | orchestrator | Sunday 29 March 2026 01:04:49 +0000 (0:00:03.049) 0:02:13.326 ********** 2026-03-29 01:07:03.057280 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:03.057284 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.057290 | orchestrator | 
skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:03.057296 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.057302 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:03.057308 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.057314 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:03.057320 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.057326 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:03.057332 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.057338 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-29 01:07:03.057345 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.057350 | orchestrator | 2026-03-29 01:07:03.057356 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-29 01:07:03.057361 | orchestrator | Sunday 29 March 2026 01:04:52 +0000 (0:00:02.475) 0:02:15.801 ********** 2026-03-29 01:07:03.057373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.057383 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.057392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.057402 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.057414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-29 01:07:03.057419 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.057425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.057431 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.057440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-03-29 01:07:03.057446 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.057453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-29 01:07:03.057457 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.057460 | orchestrator | 2026-03-29 01:07:03.057464 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-29 01:07:03.057468 | orchestrator | Sunday 29 March 2026 01:04:54 +0000 (0:00:01.911) 0:02:17.713 ********** 2026-03-29 01:07:03.057474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.057482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.057486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-29 01:07:03.057493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.057497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.057504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-29 01:07:03.057507 | orchestrator | 2026-03-29 01:07:03.057511 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-29 01:07:03.057515 | orchestrator | Sunday 29 March 2026 01:04:56 +0000 (0:00:02.711) 0:02:20.424 ********** 2026-03-29 01:07:03.057518 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:07:03.057522 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:07:03.057525 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:07:03.057529 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:07:03.057532 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:07:03.057538 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:07:03.057542 | orchestrator | 2026-03-29 01:07:03.057545 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-29 01:07:03.057549 | orchestrator | Sunday 29 March 2026 01:04:57 +0000 (0:00:00.509) 0:02:20.934 ********** 2026-03-29 01:07:03.057553 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:03.057556 | orchestrator | 2026-03-29 01:07:03.057560 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-29 01:07:03.057563 | orchestrator | Sunday 29 March 2026 01:04:59 +0000 (0:00:01.893) 0:02:22.827 ********** 2026-03-29 01:07:03.057567 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:03.057570 | orchestrator | 2026-03-29 01:07:03.057574 | orchestrator | TASK [neutron : 
Running Neutron bootstrap container] *************************** 2026-03-29 01:07:03.057577 | orchestrator | Sunday 29 March 2026 01:05:01 +0000 (0:00:02.033) 0:02:24.861 ********** 2026-03-29 01:07:03.057581 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:03.057584 | orchestrator | 2026-03-29 01:07:03.057588 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:03.057591 | orchestrator | Sunday 29 March 2026 01:05:42 +0000 (0:00:41.444) 0:03:06.305 ********** 2026-03-29 01:07:03.057595 | orchestrator | 2026-03-29 01:07:03.057599 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:03.057602 | orchestrator | Sunday 29 March 2026 01:05:42 +0000 (0:00:00.063) 0:03:06.368 ********** 2026-03-29 01:07:03.057606 | orchestrator | 2026-03-29 01:07:03.057610 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:03.057614 | orchestrator | Sunday 29 March 2026 01:05:43 +0000 (0:00:00.206) 0:03:06.575 ********** 2026-03-29 01:07:03.057617 | orchestrator | 2026-03-29 01:07:03.057621 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:03.057625 | orchestrator | Sunday 29 March 2026 01:05:43 +0000 (0:00:00.059) 0:03:06.634 ********** 2026-03-29 01:07:03.057628 | orchestrator | 2026-03-29 01:07:03.057632 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:03.057635 | orchestrator | Sunday 29 March 2026 01:05:43 +0000 (0:00:00.060) 0:03:06.695 ********** 2026-03-29 01:07:03.057639 | orchestrator | 2026-03-29 01:07:03.057642 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-29 01:07:03.057648 | orchestrator | Sunday 29 March 2026 01:05:43 +0000 (0:00:00.059) 0:03:06.754 ********** 2026-03-29 01:07:03.057652 | 
orchestrator | 2026-03-29 01:07:03.057655 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-29 01:07:03.057660 | orchestrator | Sunday 29 March 2026 01:05:43 +0000 (0:00:00.061) 0:03:06.816 ********** 2026-03-29 01:07:03.057664 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:07:03.057667 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:07:03.057671 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:07:03.057675 | orchestrator | 2026-03-29 01:07:03.057678 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-29 01:07:03.057682 | orchestrator | Sunday 29 March 2026 01:06:10 +0000 (0:00:27.633) 0:03:34.449 ********** 2026-03-29 01:07:03.057685 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:07:03.057689 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:07:03.057692 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:07:03.057696 | orchestrator | 2026-03-29 01:07:03.057700 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:07:03.057704 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 01:07:03.057708 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-29 01:07:03.057712 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-29 01:07:03.057716 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 01:07:03.057719 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 01:07:03.057723 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-29 01:07:03.057726 | orchestrator | 2026-03-29 
01:07:03.057730 | orchestrator | 2026-03-29 01:07:03.057734 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:07:03.057737 | orchestrator | Sunday 29 March 2026 01:07:01 +0000 (0:00:50.596) 0:04:25.047 ********** 2026-03-29 01:07:03.057741 | orchestrator | =============================================================================== 2026-03-29 01:07:03.057744 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 50.60s 2026-03-29 01:07:03.057748 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.44s 2026-03-29 01:07:03.057751 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.63s 2026-03-29 01:07:03.057755 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.18s 2026-03-29 01:07:03.057759 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.10s 2026-03-29 01:07:03.057762 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.53s 2026-03-29 01:07:03.057766 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.44s 2026-03-29 01:07:03.057769 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 4.33s 2026-03-29 01:07:03.057775 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.26s 2026-03-29 01:07:03.057779 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.20s 2026-03-29 01:07:03.057782 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.06s 2026-03-29 01:07:03.057786 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.79s 2026-03-29 01:07:03.057790 | orchestrator | neutron : Creating TLS backend PEM File 
--------------------------------- 3.54s 2026-03-29 01:07:03.057826 | orchestrator | Load and persist kernel modules ----------------------------------------- 3.53s 2026-03-29 01:07:03.057831 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.51s 2026-03-29 01:07:03.057834 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.31s 2026-03-29 01:07:03.057838 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.22s 2026-03-29 01:07:03.057841 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.20s 2026-03-29 01:07:03.057845 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.16s 2026-03-29 01:07:03.057848 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.14s 2026-03-29 01:07:03.057852 | orchestrator | 2026-03-29 01:07:03 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:03.057856 | orchestrator | 2026-03-29 01:07:03 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:03.057859 | orchestrator | 2026-03-29 01:07:03 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:06.075109 | orchestrator | 2026-03-29 01:07:06 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:06.075336 | orchestrator | 2026-03-29 01:07:06 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:06.076505 | orchestrator | 2026-03-29 01:07:06 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:06.076984 | orchestrator | 2026-03-29 01:07:06 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:06.077011 | orchestrator | 2026-03-29 01:07:06 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:09.102874 | orchestrator | 
2026-03-29 01:07:09 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:09.105896 | orchestrator | 2026-03-29 01:07:09 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:09.106295 | orchestrator | 2026-03-29 01:07:09 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:09.106917 | orchestrator | 2026-03-29 01:07:09 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:09.106940 | orchestrator | 2026-03-29 01:07:09 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:12.138146 | orchestrator | 2026-03-29 01:07:12 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:12.138766 | orchestrator | 2026-03-29 01:07:12 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:12.139751 | orchestrator | 2026-03-29 01:07:12 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:12.139769 | orchestrator | 2026-03-29 01:07:12 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:12.139774 | orchestrator | 2026-03-29 01:07:12 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:15.156310 | orchestrator | 2026-03-29 01:07:15 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:15.156814 | orchestrator | 2026-03-29 01:07:15 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:15.157502 | orchestrator | 2026-03-29 01:07:15 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:15.158304 | orchestrator | 2026-03-29 01:07:15 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:15.158325 | orchestrator | 2026-03-29 01:07:15 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:18.183651 | orchestrator | 2026-03-29 01:07:18 | INFO  | 
Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:18.183890 | orchestrator | 2026-03-29 01:07:18 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:18.185139 | orchestrator | 2026-03-29 01:07:18 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:18.185794 | orchestrator | 2026-03-29 01:07:18 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:18.186418 | orchestrator | 2026-03-29 01:07:18 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:21.348172 | orchestrator | 2026-03-29 01:07:21 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:21.362048 | orchestrator | 2026-03-29 01:07:21 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:21.406182 | orchestrator | 2026-03-29 01:07:21 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:21.406235 | orchestrator | 2026-03-29 01:07:21 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:21.406244 | orchestrator | 2026-03-29 01:07:21 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:24.415217 | orchestrator | 2026-03-29 01:07:24 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:24.415474 | orchestrator | 2026-03-29 01:07:24 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:24.416309 | orchestrator | 2026-03-29 01:07:24 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:24.417063 | orchestrator | 2026-03-29 01:07:24 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:24.417299 | orchestrator | 2026-03-29 01:07:24 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:27.523363 | orchestrator | 2026-03-29 01:07:27 | INFO  | Task 
aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:27.523934 | orchestrator | 2026-03-29 01:07:27 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:27.524646 | orchestrator | 2026-03-29 01:07:27 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:27.525522 | orchestrator | 2026-03-29 01:07:27 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:27.525561 | orchestrator | 2026-03-29 01:07:27 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:30.557484 | orchestrator | 2026-03-29 01:07:30 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:30.558067 | orchestrator | 2026-03-29 01:07:30 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:30.558864 | orchestrator | 2026-03-29 01:07:30 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:30.559588 | orchestrator | 2026-03-29 01:07:30 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:30.559865 | orchestrator | 2026-03-29 01:07:30 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:33.589317 | orchestrator | 2026-03-29 01:07:33 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:33.589711 | orchestrator | 2026-03-29 01:07:33 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:33.590407 | orchestrator | 2026-03-29 01:07:33 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:33.590988 | orchestrator | 2026-03-29 01:07:33 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:33.591021 | orchestrator | 2026-03-29 01:07:33 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:36.620223 | orchestrator | 2026-03-29 01:07:36 | INFO  | Task 
aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:36.621168 | orchestrator | 2026-03-29 01:07:36 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:36.622320 | orchestrator | 2026-03-29 01:07:36 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:36.623090 | orchestrator | 2026-03-29 01:07:36 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:36.623121 | orchestrator | 2026-03-29 01:07:36 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:39.662509 | orchestrator | 2026-03-29 01:07:39 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:39.663548 | orchestrator | 2026-03-29 01:07:39 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:39.664299 | orchestrator | 2026-03-29 01:07:39 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:39.665247 | orchestrator | 2026-03-29 01:07:39 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:39.665284 | orchestrator | 2026-03-29 01:07:39 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:42.866167 | orchestrator | 2026-03-29 01:07:42 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:42.866868 | orchestrator | 2026-03-29 01:07:42 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:42.867515 | orchestrator | 2026-03-29 01:07:42 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:42.868238 | orchestrator | 2026-03-29 01:07:42 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:42.868513 | orchestrator | 2026-03-29 01:07:42 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:45.897237 | orchestrator | 2026-03-29 01:07:45 | INFO  | Task 
aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:45.898062 | orchestrator | 2026-03-29 01:07:45 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:45.898105 | orchestrator | 2026-03-29 01:07:45 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:45.898695 | orchestrator | 2026-03-29 01:07:45 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:45.898721 | orchestrator | 2026-03-29 01:07:45 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:48.924810 | orchestrator | 2026-03-29 01:07:48 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:48.925185 | orchestrator | 2026-03-29 01:07:48 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:48.925966 | orchestrator | 2026-03-29 01:07:48 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:48.926791 | orchestrator | 2026-03-29 01:07:48 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:48.926967 | orchestrator | 2026-03-29 01:07:48 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:51.957532 | orchestrator | 2026-03-29 01:07:51 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:51.957995 | orchestrator | 2026-03-29 01:07:51 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:51.958807 | orchestrator | 2026-03-29 01:07:51 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:51.959552 | orchestrator | 2026-03-29 01:07:51 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:51.959647 | orchestrator | 2026-03-29 01:07:51 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:55.004510 | orchestrator | 2026-03-29 01:07:55 | INFO  | Task 
aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:55.006850 | orchestrator | 2026-03-29 01:07:55 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:55.008569 | orchestrator | 2026-03-29 01:07:55 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:55.010504 | orchestrator | 2026-03-29 01:07:55 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:55.010562 | orchestrator | 2026-03-29 01:07:55 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:07:58.062238 | orchestrator | 2026-03-29 01:07:58 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:07:58.062299 | orchestrator | 2026-03-29 01:07:58 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:07:58.062637 | orchestrator | 2026-03-29 01:07:58 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:07:58.063551 | orchestrator | 2026-03-29 01:07:58 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:07:58.063579 | orchestrator | 2026-03-29 01:07:58 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:08:01.118393 | orchestrator | 2026-03-29 01:08:01 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:08:01.120259 | orchestrator | 2026-03-29 01:08:01 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:08:01.122351 | orchestrator | 2026-03-29 01:08:01 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:08:01.124097 | orchestrator | 2026-03-29 01:08:01 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state STARTED 2026-03-29 01:08:01.124153 | orchestrator | 2026-03-29 01:08:01 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:08:04.172750 | orchestrator | 2026-03-29 01:08:04 | INFO  | Task 
cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED 2026-03-29 01:08:04.174863 | orchestrator | 2026-03-29 01:08:04 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:08:04.177462 | orchestrator | 2026-03-29 01:08:04 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED 2026-03-29 01:08:04.180926 | orchestrator | 2026-03-29 01:08:04 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:08:04.187918 | orchestrator | 2026-03-29 01:08:04 | INFO  | Task 25b2c47b-b217-4bd9-9905-e91cb4f3a89a is in state SUCCESS 2026-03-29 01:08:04.189235 | orchestrator | 2026-03-29 01:08:04.189267 | orchestrator | 2026-03-29 01:08:04.189272 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:08:04.189276 | orchestrator | 2026-03-29 01:08:04.189279 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:08:04.189284 | orchestrator | Sunday 29 March 2026 01:04:59 +0000 (0:00:00.253) 0:00:00.253 ********** 2026-03-29 01:08:04.189288 | orchestrator | ok: [testbed-manager] 2026-03-29 01:08:04.189293 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:08:04.189299 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:08:04.189304 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:08:04.189310 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:08:04.189328 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:08:04.189332 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:08:04.189336 | orchestrator | 2026-03-29 01:08:04.189340 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:08:04.189344 | orchestrator | Sunday 29 March 2026 01:04:59 +0000 (0:00:00.818) 0:00:01.072 ********** 2026-03-29 01:08:04.189348 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-29 01:08:04.189352 | orchestrator | ok: 
[testbed-node-0] => (item=enable_prometheus_True) 2026-03-29 01:08:04.189355 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-29 01:08:04.189359 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-29 01:08:04.189363 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-29 01:08:04.189366 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-29 01:08:04.189370 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-29 01:08:04.189374 | orchestrator | 2026-03-29 01:08:04.189377 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-29 01:08:04.189381 | orchestrator | 2026-03-29 01:08:04.189391 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-29 01:08:04.189395 | orchestrator | Sunday 29 March 2026 01:05:00 +0000 (0:00:00.690) 0:00:01.762 ********** 2026-03-29 01:08:04.189399 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:08:04.189404 | orchestrator | 2026-03-29 01:08:04.189408 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-29 01:08:04.189414 | orchestrator | Sunday 29 March 2026 01:05:02 +0000 (0:00:01.331) 0:00:03.094 ********** 2026-03-29 01:08:04.189421 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 01:08:04.189428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.189434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.189439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-03-29 01:08:04.189458 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.189463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.189472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.189477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.189483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.189487 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.189492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.189503 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.189510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.189518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.189526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.189532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.189602 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 01:08:04.189613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.189628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.189633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.189642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.189647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.189653 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.189658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.189663 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.189917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.189935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.189940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.189947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.189950 | orchestrator | 2026-03-29 01:08:04.189954 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-29 01:08:04.189958 | orchestrator | Sunday 29 March 2026 01:05:04 +0000 (0:00:02.454) 0:00:05.548 ********** 2026-03-29 01:08:04.189961 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:08:04.189965 | orchestrator | 2026-03-29 01:08:04.189969 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-29 01:08:04.189972 | orchestrator | Sunday 29 March 2026 01:05:05 +0000 (0:00:01.326) 0:00:06.875 ********** 2026-03-29 01:08:04.189975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.189979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.189986 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 01:08:04.189993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.189997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.190008 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.190036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.190042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.190049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.190060 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.190065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.190075 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.190080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.190089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.190094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.190100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.190109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.190115 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.190121 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.190128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.190131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.190136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.190139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.190143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.190152 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 01:08:04.190160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.190238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.190261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.190266 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.190271 | orchestrator | 2026-03-29 01:08:04.190275 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-29 01:08:04.190280 | orchestrator | Sunday 29 March 2026 01:05:11 +0000 (0:00:05.383) 0:00:12.258 ********** 2026-03-29 01:08:04.190286 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 01:08:04.190296 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190302 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190312 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 01:08:04.190320 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190354 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:08:04.190361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190399 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:08:04.190403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190406 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:08:04.190409 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:08:04.190414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190427 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:08:04.190430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190443 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:08:04.190446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190462 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:08:04.190465 | orchestrator | 2026-03-29 01:08:04.190468 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-29 01:08:04.190472 | orchestrator | Sunday 29 March 2026 01:05:12 +0000 (0:00:01.263) 0:00:13.522 ********** 2026-03-29 01:08:04.190475 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}})  2026-03-29 01:08:04.190478 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190482 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190487 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-29 01:08:04.190583 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190660 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:08:04.190666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 
01:08:04.190673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190676 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:08:04.190680 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:08:04.190683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-29 01:08:04.190796 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:08:04.190802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190817 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:08:04.190820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-03-29 01:08:04.190824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:08:04.190840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190851 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:08:04.190856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-29 01:08:04.190864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-29 01:08:04.190873 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:08:04.190879 | orchestrator | 2026-03-29 01:08:04.190883 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-29 01:08:04.190889 | orchestrator | Sunday 29 March 2026 01:05:14 +0000 (0:00:01.699) 0:00:15.221 ********** 2026-03-29 01:08:04.190894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 
01:08:04.190900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.190909 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 01:08:04.190955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.190966 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.190972 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.190978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.190983 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.190989 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.190994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.191007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.191012 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.191020 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.191025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.191031 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.191035 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.191039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.191045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.191051 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.191055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.191060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.191064 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 01:08:04.191068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.191076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.191081 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.191085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.191088 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.191094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.191097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.191101 | orchestrator | 2026-03-29 01:08:04.191104 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-29 01:08:04.191107 | orchestrator | Sunday 29 March 2026 01:05:19 +0000 (0:00:05.543) 0:00:20.765 ********** 2026-03-29 01:08:04.191111 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-29 01:08:04.191114 | orchestrator | 2026-03-29 01:08:04.191117 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-29 01:08:04.191120 | orchestrator | Sunday 29 March 2026 01:05:20 +0000 (0:00:01.050) 0:00:21.816 ********** 2026-03-29 01:08:04.191124 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312133, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3805478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191130 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312133, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3805478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191137 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312133, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3805478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191140 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312133, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3805478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191146 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312159, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3853736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191149 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312133, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3805478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191152 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312159, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3853736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191158 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312133, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3805478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191162 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312159, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3853736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191167 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312159, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3853736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191170 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312159, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1774743596.3853736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191175 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1312133, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3805478, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:08:04.191179 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312159, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3853736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191182 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312131, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3795445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191188 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312131, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3795445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191191 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312131, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3795445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191196 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312131, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3795445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191200 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312143, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3846877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191204 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312131, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3795445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191208 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312131, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3795445, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191211 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312143, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3846877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191469 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312143, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3846877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191478 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312143, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3846877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191485 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1312159, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774743596.3853736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:08:04.191489 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312143, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3846877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191496 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312129, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3791766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191500 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312129, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3791766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191507 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312143, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3846877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191511 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312129, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3791766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191515 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312129, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3791766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191522 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312129, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3791766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191526 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312134, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3807378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191531 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312129, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3791766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191535 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312134, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3807378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191541 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312134, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3807378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191545 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312134, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3807378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191549 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1312131, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3795445, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:08:04.191555 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312134, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3807378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191559 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312134, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3807378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191564 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312142, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-03-29 01:08:04.191568 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312142, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191574 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312142, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191578 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312142, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191582 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312142, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191587 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312142, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191591 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312135, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.380903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191595 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312135, 
'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.380903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191602 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312135, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.380903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191608 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312135, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.380903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191612 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312132, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3802938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191616 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312135, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.380903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191622 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312135, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.380903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191626 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312132, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3802938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 
01:08:04.191630 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312132, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3802938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191638 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312132, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3802938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191642 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312156, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3851898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191646 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1312143, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3846877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-29 01:08:04.191650 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312156, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3851898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191654 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312156, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3851898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191660 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 
1312132, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3802938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191685 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312127, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3788238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191693 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312127, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3788238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191697 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312156, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3851898, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191700 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312132, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3802938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191776 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312156, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3851898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191783 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312127, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3788238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-03-29 01:08:04.191789 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312167, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.386728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191792 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312167, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.386728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191909 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312127, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3788238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191915 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312127, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3788238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191919 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312156, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3851898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191922 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312154, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3850088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191927 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 7408, 'inode': 1312154, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3850088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191937 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312167, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.386728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191945 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312167, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.386728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191958 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312167, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.386728, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191963 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312154, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3850088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191969 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312130, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3793843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191974 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1312129, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3791766, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2026-03-29 01:08:04.191978 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312127, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3788238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191987 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312130, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3793843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.191997 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312154, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3850088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.192006 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312128, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.378981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.192012 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312154, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3850088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.192017 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312130, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3793843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.192023 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 
'inode': 1312140, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.192028 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312128, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.378981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.192037 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312167, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.386728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.192046 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312130, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3793843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.192054 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312130, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3793843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.192058 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312128, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.378981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-29 01:08:04.192062 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312136, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3810961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-03-29 01:08:04.192065 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312128, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.378981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192068 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312140, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192074 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312154, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3850088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192083 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312140, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192090 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1312134, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3807378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192093 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312140, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192097 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312128, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.378981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192100 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312165, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3860788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192103 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:08:04.192107 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312136, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3810961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192116 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312130, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3793843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192120 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312136, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3810961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192125 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312136, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3810961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192128 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312165, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3860788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192131 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:08:04.192134 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312140, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192138 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312165, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3860788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192141 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:08:04.192144 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1312142, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192152 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312128, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.378981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192155 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312136, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3810961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192160 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312165, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3860788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192164 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:08:04.192167 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312140, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192170 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312165, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3860788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192173 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:08:04.192177 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312136, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3810961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192182 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1312135, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.380903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192187 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312165, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3860788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192190 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:08:04.192194 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1312132, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3802938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192199 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312156, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3851898, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192202 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312127, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3788238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192205 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1312167, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.386728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192208 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1312154, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3850088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192214 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1312130, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3793843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192220 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1312128, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.378981, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192223 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1312140, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3816006, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192228 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1312136, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3810961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192232 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1312165, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3860788, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-29 01:08:04.192237 | orchestrator |
2026-03-29 01:08:04.192242 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-29 01:08:04.192249 | orchestrator | Sunday 29 March 2026 01:05:46 +0000 (0:00:25.678) 0:00:47.494 **********
2026-03-29 01:08:04.192256 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:08:04.192261 | orchestrator |
2026-03-29 01:08:04.192266 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-29 01:08:04.192271 | orchestrator | Sunday 29 March 2026 01:05:47 +0000 (0:00:00.858) 0:00:48.352 **********
2026-03-29 01:08:04.192276 | orchestrator | [WARNING]: Skipped
2026-03-29 01:08:04.192281 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192286 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-29 01:08:04.192296 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192300 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-29 01:08:04.192306 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 01:08:04.192310 | orchestrator | [WARNING]: Skipped
2026-03-29 01:08:04.192315 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192320 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-29 01:08:04.192325 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192330 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-29 01:08:04.192335 | orchestrator | [WARNING]: Skipped
2026-03-29 01:08:04.192340 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192345 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-29 01:08:04.192349 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192354 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-29 01:08:04.192359 | orchestrator | [WARNING]: Skipped
2026-03-29 01:08:04.192364 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192368 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-29 01:08:04.192374 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192379 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-29 01:08:04.192384 | orchestrator | [WARNING]: Skipped
2026-03-29 01:08:04.192390 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192395 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-29 01:08:04.192404 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192409 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-29 01:08:04.192415 | orchestrator | [WARNING]: Skipped
2026-03-29 01:08:04.192419 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192424 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-29 01:08:04.192429 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192434 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-29 01:08:04.192439 | orchestrator | [WARNING]: Skipped
2026-03-29 01:08:04.192444 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192449 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-29 01:08:04.192453 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-29 01:08:04.192457 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-29 01:08:04.192462 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:08:04.192466 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-29 01:08:04.192471 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-29 01:08:04.192477 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-29 01:08:04.192482 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-29 01:08:04.192487 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-29 01:08:04.192492 | orchestrator |
2026-03-29 01:08:04.192498 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-29 01:08:04.192503 | orchestrator | Sunday 29 March 2026 01:05:49 +0000 (0:00:02.698) 0:00:51.050 **********
2026-03-29 01:08:04.192511 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 01:08:04.192516 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:08:04.192521 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 01:08:04.192531 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:08:04.192536 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 01:08:04.192542 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:08:04.192546 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 01:08:04.192550 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:08:04.192554 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 01:08:04.192557 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:08:04.192561 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 01:08:04.192565 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:08:04.192568 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-29 01:08:04.192572 | orchestrator |
2026-03-29 01:08:04.192576 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-29 01:08:04.192581 | orchestrator | Sunday 29 March 2026 01:06:05 +0000 (0:00:15.298) 0:01:06.348 **********
2026-03-29 01:08:04.192586 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:08:04.192594 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:08:04.192601 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:08:04.192605 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:08:04.192611 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:08:04.192616 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:08:04.192621 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:08:04.192627 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:08:04.192632 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:08:04.192637 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:08:04.192643 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:08:04.192647 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:08:04.192651 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-29 01:08:04.192655 | orchestrator |
2026-03-29 01:08:04.192659 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-29 01:08:04.192672 | orchestrator | Sunday 29 March 2026 01:06:09 +0000 (0:00:04.365) 0:01:10.714 **********
2026-03-29 01:08:04.192676 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:08:04.192684 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:08:04.192688 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:08:04.192692 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:08:04.192695 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:08:04.192755 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:08:04.192760 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:08:04.192764 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:08:04.192770 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:08:04.192776 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:08:04.192788 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:08:04.192796 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-29 01:08:04.192802 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:08:04.192808 | orchestrator |
2026-03-29 01:08:04.192814 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-29 01:08:04.192819 | orchestrator | Sunday 29 March 2026 01:06:13 +0000 (0:00:03.663) 0:01:14.377 **********
2026-03-29 01:08:04.192824 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:08:04.192830 | orchestrator |
2026-03-29 01:08:04.192835 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-29 01:08:04.192842 | orchestrator | Sunday 29 March 2026 01:06:14 +0000 (0:00:01.184) 0:01:15.561 **********
2026-03-29 01:08:04.192848 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:08:04.192854 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:08:04.192859 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:08:04.192866 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:08:04.192874 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:08:04.192880 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:08:04.192886 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:08:04.192891 | orchestrator |
2026-03-29 01:08:04.192901 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-29 01:08:04.192907 | orchestrator | Sunday 29 March 2026 01:06:15 +0000 (0:00:00.884) 0:01:16.446 **********
2026-03-29 01:08:04.192912 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:08:04.192917 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:08:04.192923 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:08:04.192929 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:08:04.192935 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:08:04.192941 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:08:04.192948 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:08:04.192954 | orchestrator |
2026-03-29 01:08:04.192960 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-29 01:08:04.192966 | orchestrator | Sunday 29 March 2026 01:06:17 +0000 (0:00:02.504) 0:01:18.950 **********
2026-03-29 01:08:04.192972 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:08:04.192977 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:08:04.192981 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:08:04.192984 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:08:04.192987 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:08:04.192990 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:08:04.192993 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:08:04.192996 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:08:04.192999 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:08:04.193002 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:08:04.193005 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:08:04.193008 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:08:04.193011 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-29 01:08:04.193015 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:08:04.193018 | orchestrator |
2026-03-29 01:08:04.193021 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-29 01:08:04.193024 | orchestrator | Sunday 29 March 2026 01:06:19 +0000 (0:00:01.822) 0:01:20.773 **********
2026-03-29 01:08:04.193027 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:08:04.193034 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:08:04.193037 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:08:04.193040 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:08:04.193044 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:08:04.193047 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:08:04.193050 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:08:04.193053 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:08:04.193056 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:08:04.193059 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:08:04.193062 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:08:04.193065 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:08:04.193068 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-29 01:08:04.193071 | orchestrator |
2026-03-29 01:08:04.193075 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-03-29 01:08:04.193081 | orchestrator | Sunday 29 March 2026 01:06:21 +0000 (0:00:01.766) 0:01:22.540 **********
2026-03-29 01:08:04.193085 | orchestrator | [WARNING]: Skipped
2026-03-29 01:08:04.193088 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-03-29 01:08:04.193091 | orchestrator | due to this access issue:
2026-03-29 01:08:04.193095 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-03-29 01:08:04.193098 | orchestrator | not a directory
2026-03-29 01:08:04.193101 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:08:04.193104 | orchestrator |
2026-03-29 01:08:04.193107 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-03-29 01:08:04.193110 | orchestrator | Sunday 29 March 2026 01:06:22 +0000 (0:00:01.339) 0:01:23.879 **********
2026-03-29 01:08:04.193113 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:08:04.193116 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:08:04.193120 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:08:04.193123 | orchestrator |
skipping: [testbed-node-2] 2026-03-29 01:08:04.193126 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:08:04.193129 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:08:04.193132 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:08:04.193135 | orchestrator | 2026-03-29 01:08:04.193138 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-29 01:08:04.193141 | orchestrator | Sunday 29 March 2026 01:06:24 +0000 (0:00:01.400) 0:01:25.280 ********** 2026-03-29 01:08:04.193144 | orchestrator | skipping: [testbed-manager] 2026-03-29 01:08:04.193148 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:08:04.193151 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:08:04.193154 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:08:04.193157 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:08:04.193160 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:08:04.193168 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:08:04.193171 | orchestrator | 2026-03-29 01:08:04.193174 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-29 01:08:04.193178 | orchestrator | Sunday 29 March 2026 01:06:25 +0000 (0:00:01.017) 0:01:26.297 ********** 2026-03-29 01:08:04.193181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.193187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.193191 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-29 01:08:04.193195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.193202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.193206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.193211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.193214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.193221 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-29 01:08:04.193224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.193228 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.193231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.193237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.193240 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.193245 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.193253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.193257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.193260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.193263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.193267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.193272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.193278 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-29 01:08:04.193284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.193288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.193292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-29 01:08:04.193297 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.193305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.193310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-29 01:08:04.193315 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-29 01:08:04.193323 | orchestrator |
2026-03-29 01:08:04.193328 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-03-29 01:08:04.193334 | orchestrator | Sunday 29 March 2026 01:06:29 +0000 (0:00:04.554) 0:01:30.852 **********
2026-03-29 01:08:04.193340 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-29 01:08:04.193345 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:08:04.193350 | orchestrator |
2026-03-29 01:08:04.193356 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:08:04.193361 | orchestrator | Sunday 29 March 2026 01:06:31 +0000 (0:00:01.308) 0:01:32.161 **********
2026-03-29 01:08:04.193366 | orchestrator |
2026-03-29 01:08:04.193372 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:08:04.193375 | orchestrator | Sunday 29 March 2026 01:06:31 +0000 (0:00:00.087) 0:01:32.249 **********
2026-03-29 01:08:04.193378 | orchestrator |
2026-03-29 01:08:04.193381 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:08:04.193384 | orchestrator | Sunday 29 March 2026 01:06:31 +0000 (0:00:00.071) 0:01:32.320 **********
2026-03-29 01:08:04.193387 | orchestrator |
2026-03-29 01:08:04.193391 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:08:04.193394 | orchestrator | Sunday 29 March 2026 01:06:31 +0000 (0:00:00.066) 0:01:32.387 **********
2026-03-29 01:08:04.193397 | orchestrator |
2026-03-29 01:08:04.193400 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:08:04.193403 | orchestrator | Sunday 29 March 2026 01:06:31 +0000 (0:00:00.234) 0:01:32.621 **********
2026-03-29 01:08:04.193406 | orchestrator |
2026-03-29 01:08:04.193410 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:08:04.193413 | orchestrator | Sunday 29 March 2026 01:06:31 +0000 (0:00:00.076) 0:01:32.698 **********
2026-03-29 01:08:04.193416 | orchestrator |
2026-03-29 01:08:04.193419 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-29 01:08:04.193422 | orchestrator | Sunday 29 March 2026 01:06:31 +0000 (0:00:00.064) 0:01:32.762 **********
2026-03-29 01:08:04.193425 | orchestrator |
2026-03-29 01:08:04.193428 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-03-29 01:08:04.193432 | orchestrator | Sunday 29 March 2026 01:06:31 +0000 (0:00:00.088) 0:01:32.851 **********
2026-03-29 01:08:04.193435 | orchestrator | changed: [testbed-manager]
2026-03-29 01:08:04.193439 | orchestrator |
2026-03-29 01:08:04.193444 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-03-29 01:08:04.193449 | orchestrator | Sunday 29 March 2026 01:06:46 +0000 (0:00:14.924) 0:01:47.776 **********
2026-03-29 01:08:04.193454 | orchestrator | changed: [testbed-manager]
2026-03-29 01:08:04.193460 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:08:04.193465 | orchestrator | changed: [testbed-node-3]
2026-03-29 01:08:04.193469 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:08:04.193475 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:08:04.193480 | orchestrator | changed: [testbed-node-4]
2026-03-29 01:08:04.193486 | orchestrator | changed: [testbed-node-5]
2026-03-29 01:08:04.193491 | orchestrator |
2026-03-29 01:08:04.193496 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-03-29 01:08:04.193501 | orchestrator | Sunday 29 March 2026 01:07:01 +0000 (0:00:14.417) 0:02:02.193 **********
2026-03-29 01:08:04.193506 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:08:04.193512 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:08:04.193516 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:08:04.193519 | orchestrator |
2026-03-29 01:08:04.193522 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-03-29 01:08:04.193528 | orchestrator | Sunday 29 March 2026 01:07:11 +0000 (0:00:10.364) 0:02:12.557 **********
2026-03-29 01:08:04.193532 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:08:04.193535 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:08:04.193538 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:08:04.193541 | orchestrator |
2026-03-29 01:08:04.193544 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-03-29 01:08:04.193547 | orchestrator | Sunday 29 March 2026 01:07:17 +0000 (0:00:06.281) 0:02:18.839 **********
2026-03-29 01:08:04.193551 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:08:04.193554 | orchestrator | changed: [testbed-node-4]
2026-03-29 01:08:04.193557 | orchestrator | changed: [testbed-node-5]
2026-03-29 01:08:04.193560 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:08:04.193566 | orchestrator | changed: [testbed-manager]
2026-03-29 01:08:04.193569 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:08:04.193572 | orchestrator | changed: [testbed-node-3]
2026-03-29 01:08:04.193575 | orchestrator |
2026-03-29 01:08:04.193578
| orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-29 01:08:04.193581 | orchestrator | Sunday 29 March 2026 01:07:33 +0000 (0:00:15.652) 0:02:34.492 ********** 2026-03-29 01:08:04.193585 | orchestrator | changed: [testbed-manager] 2026-03-29 01:08:04.193588 | orchestrator | 2026-03-29 01:08:04.193591 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-29 01:08:04.193612 | orchestrator | Sunday 29 March 2026 01:07:40 +0000 (0:00:07.042) 0:02:41.535 ********** 2026-03-29 01:08:04.193615 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:08:04.193618 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:08:04.193621 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:08:04.193625 | orchestrator | 2026-03-29 01:08:04.193628 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-29 01:08:04.193631 | orchestrator | Sunday 29 March 2026 01:07:52 +0000 (0:00:11.620) 0:02:53.155 ********** 2026-03-29 01:08:04.193634 | orchestrator | changed: [testbed-manager] 2026-03-29 01:08:04.193637 | orchestrator | 2026-03-29 01:08:04.193640 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-29 01:08:04.193643 | orchestrator | Sunday 29 March 2026 01:07:56 +0000 (0:00:04.657) 0:02:57.813 ********** 2026-03-29 01:08:04.193646 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:08:04.193649 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:08:04.193653 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:08:04.193656 | orchestrator | 2026-03-29 01:08:04.193659 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:08:04.193664 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-29 01:08:04.193668 | orchestrator | 
testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-29 01:08:04.193671 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-29 01:08:04.193674 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-29 01:08:04.193677 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 01:08:04.193681 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 01:08:04.193684 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-29 01:08:04.193687 | orchestrator | 2026-03-29 01:08:04.193690 | orchestrator | 2026-03-29 01:08:04.193695 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:08:04.193699 | orchestrator | Sunday 29 March 2026 01:08:02 +0000 (0:00:05.827) 0:03:03.640 ********** 2026-03-29 01:08:04.193714 | orchestrator | =============================================================================== 2026-03-29 01:08:04.193719 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.68s 2026-03-29 01:08:04.193722 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.65s 2026-03-29 01:08:04.193725 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.30s 2026-03-29 01:08:04.193728 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 14.92s 2026-03-29 01:08:04.193732 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.42s 2026-03-29 01:08:04.193735 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.62s 2026-03-29 01:08:04.193738 | 
orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.36s 2026-03-29 01:08:04.193741 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.04s 2026-03-29 01:08:04.193744 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.28s 2026-03-29 01:08:04.193747 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.83s 2026-03-29 01:08:04.193750 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.54s 2026-03-29 01:08:04.193753 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.38s 2026-03-29 01:08:04.193758 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.66s 2026-03-29 01:08:04.193763 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.55s 2026-03-29 01:08:04.193768 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.37s 2026-03-29 01:08:04.193772 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.66s 2026-03-29 01:08:04.193780 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.70s 2026-03-29 01:08:04.193785 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.50s 2026-03-29 01:08:04.193790 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.45s 2026-03-29 01:08:04.193795 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.82s 2026-03-29 01:08:07.243784 | orchestrator | 2026-03-29 01:08:07 | INFO  | Task cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED 2026-03-29 01:08:07.246438 | orchestrator | 2026-03-29 01:08:07 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 
01:08:07.248151 | orchestrator | 2026-03-29 01:08:07 | INFO  | Task a8fb9e02-e150-4d21-8e85-38823163292b is in state STARTED
2026-03-29 01:08:07.249660 | orchestrator | 2026-03-29 01:08:07 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:08:07.249841 | orchestrator | 2026-03-29 01:08:07 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:09:02.062348 | orchestrator | 2026-03-29 01:09:02 | INFO  | Task cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED
2026-03-29 01:09:02.063443 | orchestrator | 2026-03-29 01:09:02 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED
2026-03-29 01:09:02.067497 | orchestrator | 2026-03-29 01:09:02 | INFO  | Task
a8fb9e02-e150-4d21-8e85-38823163292b is in state SUCCESS
2026-03-29 01:09:02.068904 | orchestrator |
2026-03-29 01:09:02.068947 | orchestrator |
2026-03-29 01:09:02.068955 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 01:09:02.068963 | orchestrator |
2026-03-29 01:09:02.068971 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 01:09:02.068978 | orchestrator | Sunday 29 March 2026 01:06:13 +0000 (0:00:00.203) 0:00:00.203 **********
2026-03-29 01:09:02.068985 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:09:02.069039 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:09:02.069048 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:09:02.069055 | orchestrator |
2026-03-29 01:09:02.069063 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 01:09:02.069207 | orchestrator | Sunday 29 March 2026 01:06:13 +0000 (0:00:00.309) 0:00:00.513 **********
2026-03-29 01:09:02.069217 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-03-29 01:09:02.069225 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-03-29 01:09:02.069232 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-03-29 01:09:02.069240 | orchestrator |
2026-03-29 01:09:02.069247 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-03-29 01:09:02.069255 | orchestrator |
2026-03-29 01:09:02.069262 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-29 01:09:02.069269 | orchestrator | Sunday 29 March 2026 01:06:14 +0000 (0:00:00.620) 0:00:01.134 **********
2026-03-29 01:09:02.069274 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:09:02.069280 | orchestrator |
2026-03-29 01:09:02.069285 |
orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-03-29 01:09:02.069290 | orchestrator | Sunday 29 March 2026 01:06:15 +0000 (0:00:00.929) 0:00:02.063 **********
2026-03-29 01:09:02.069297 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-03-29 01:09:02.069304 | orchestrator |
2026-03-29 01:09:02.069311 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-03-29 01:09:02.069318 | orchestrator | Sunday 29 March 2026 01:06:18 +0000 (0:00:03.244) 0:00:05.307 **********
2026-03-29 01:09:02.069326 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-03-29 01:09:02.069333 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-03-29 01:09:02.069340 | orchestrator |
2026-03-29 01:09:02.069348 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-03-29 01:09:02.069355 | orchestrator | Sunday 29 March 2026 01:06:24 +0000 (0:00:06.424) 0:00:11.732 **********
2026-03-29 01:09:02.069362 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-29 01:09:02.069369 | orchestrator |
2026-03-29 01:09:02.069375 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-03-29 01:09:02.069380 | orchestrator | Sunday 29 March 2026 01:06:28 +0000 (0:00:03.265) 0:00:14.998 **********
2026-03-29 01:09:02.069385 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 01:09:02.069390 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-03-29 01:09:02.069395 | orchestrator |
2026-03-29 01:09:02.069400 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-03-29 01:09:02.069405 | orchestrator | Sunday 29 March 2026 01:06:31 +0000 (0:00:03.787) 0:00:18.786
********** 2026-03-29 01:09:02.069410 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:09:02.069415 | orchestrator | 2026-03-29 01:09:02.069419 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-29 01:09:02.069424 | orchestrator | Sunday 29 March 2026 01:06:35 +0000 (0:00:03.321) 0:00:22.108 ********** 2026-03-29 01:09:02.069429 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-29 01:09:02.069434 | orchestrator | 2026-03-29 01:09:02.069439 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-29 01:09:02.069444 | orchestrator | Sunday 29 March 2026 01:06:38 +0000 (0:00:03.375) 0:00:25.483 ********** 2026-03-29 01:09:02.069470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:09:02.069485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:09:02.069493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:09:02.069503 | 
orchestrator |
2026-03-29 01:09:02.069508 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-29 01:09:02.069513 | orchestrator | Sunday 29 March 2026 01:06:42 +0000 (0:00:04.122) 0:00:29.606 **********
2026-03-29 01:09:02.069518 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:09:02.069523 | orchestrator |
2026-03-29 01:09:02.069533 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-03-29 01:09:02.069538 | orchestrator | Sunday 29 March 2026 01:06:43 +0000 (0:00:00.556) 0:00:30.163 **********
2026-03-29 01:09:02.069543 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:02.069548 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:09:02.069552 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:09:02.069557 | orchestrator |
2026-03-29 01:09:02.069562 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-03-29 01:09:02.069566 | orchestrator | Sunday 29 March 2026 01:06:46 +0000 (0:00:03.633) 0:00:33.796 **********
2026-03-29 01:09:02.069571 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:09:02.069576 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:09:02.069580 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:09:02.069585 | orchestrator |
2026-03-29 01:09:02.069590 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-03-29 01:09:02.069594 | orchestrator | Sunday 29 March 2026 01:06:49 +0000 (0:00:02.793) 0:00:36.590 **********
2026-03-29 01:09:02.069599 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:09:02.069604 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:09:02.069609 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-03-29 01:09:02.069640 | orchestrator |
2026-03-29 01:09:02.069645 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-03-29 01:09:02.069650 | orchestrator | Sunday 29 March 2026 01:06:51 +0000 (0:00:01.383) 0:00:37.973 **********
2026-03-29 01:09:02.069655 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:09:02.069660 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:09:02.069664 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:09:02.069669 | orchestrator |
2026-03-29 01:09:02.069674 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-03-29 01:09:02.069679 | orchestrator | Sunday 29 March 2026 01:06:51 +0000 (0:00:00.807) 0:00:38.780 **********
2026-03-29 01:09:02.069683 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:02.069688 | orchestrator |
2026-03-29 01:09:02.069693 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-03-29 01:09:02.069698 | orchestrator | Sunday 29 March 2026 01:06:51 +0000 (0:00:00.113) 0:00:38.894 **********
2026-03-29 01:09:02.069702 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:02.069707 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:02.069716 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:02.069721 | orchestrator |
2026-03-29 01:09:02.069725 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-29 01:09:02.069730 | orchestrator | Sunday 29 March 2026 01:06:52 +0000 (0:00:00.260) 0:00:39.154
2026-03-29 01:09:02.069734 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:09:02.069739 | orchestrator | 2026-03-29 01:09:02.069743 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-29 01:09:02.069748 | orchestrator | Sunday 29 March 2026 01:06:52 +0000 (0:00:00.527) 0:00:39.681 ********** 2026-03-29 01:09:02.069760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:09:02.069766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:09:02.069777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:09:02.069782 | orchestrator | 2026-03-29 01:09:02.069787 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-29 01:09:02.069792 | orchestrator | Sunday 29 March 2026 01:06:56 +0000 (0:00:03.676) 0:00:43.358 ********** 2026-03-29 01:09:02.069801 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 01:09:02.069806 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:02.069814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 01:09:02.069823 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:02.069831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': 
True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 01:09:02.069837 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:02.069841 | orchestrator | 2026-03-29 01:09:02.069846 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-29 01:09:02.069851 | orchestrator | Sunday 29 March 2026 01:06:58 +0000 (0:00:02.428) 0:00:45.786 ********** 2026-03-29 01:09:02.069857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 01:09:02.069865 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:02.069875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-29 01:09:02.069881 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:02.069886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-29 01:09:02.069904 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:02.069910 | orchestrator |
2026-03-29 01:09:02.069915 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-03-29 01:09:02.069921 | orchestrator | Sunday 29 March 2026 01:07:02 +0000 (0:00:03.226) 0:00:49.012 **********
2026-03-29 01:09:02.069926 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:02.069931 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:02.069936 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:02.069941 | orchestrator |
2026-03-29 01:09:02.069947 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-03-29 01:09:02.069952 | orchestrator | Sunday 29 March 2026 01:07:06 +0000 (0:00:04.089) 0:00:53.102 **********
2026-03-29 01:09:02.069960 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:09:02.069970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:09:02.069982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-29 01:09:02.069989 | orchestrator |
2026-03-29 01:09:02.069994 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-03-29 01:09:02.069999 | orchestrator | Sunday 29 March 2026 01:07:10 +0000 (0:00:04.640) 0:00:57.742 **********
2026-03-29 01:09:02.070003 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:09:02.070008 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:09:02.070057 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:02.070065 | orchestrator |
2026-03-29 01:09:02.070069 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-03-29 01:09:02.070075 | orchestrator | Sunday 29 March 2026 01:07:17 +0000 (0:00:06.579) 0:01:04.321 **********
2026-03-29 01:09:02.070080 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:02.070085 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:02.070089 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:02.070094 | orchestrator |
2026-03-29 01:09:02.070099 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-03-29 01:09:02.070105 | orchestrator | Sunday 29 March 2026 01:07:24 +0000 (0:00:07.338) 0:01:11.659 **********
2026-03-29 01:09:02.070110 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:02.070118 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:02.070123 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:02.070128 | orchestrator |
2026-03-29 01:09:02.070134 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-03-29 01:09:02.070139 | orchestrator | Sunday 29 March 2026 01:07:29 +0000 (0:00:04.315) 0:01:15.975 **********
2026-03-29 01:09:02.070147 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:02.070152 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:02.070157 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:02.070162 | orchestrator |
2026-03-29 01:09:02.070167 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-03-29 01:09:02.070172 | orchestrator | Sunday 29 March 2026 01:07:32 +0000 (0:00:03.380) 0:01:19.355 **********
2026-03-29 01:09:02.070177 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:02.070182 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:02.070187 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:02.070192 | orchestrator |
2026-03-29 01:09:02.070197 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-03-29 01:09:02.070203 | orchestrator | Sunday 29 March 2026 01:07:36 +0000 (0:00:03.671) 0:01:23.026 **********
2026-03-29 01:09:02.070208 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:02.070213 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:02.070218 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:02.070223 | orchestrator |
2026-03-29 01:09:02.070228 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-03-29 01:09:02.070233 | orchestrator | Sunday 29 March 2026 01:07:36 +0000 (0:00:00.339) 0:01:23.366 **********
2026-03-29 01:09:02.070238 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-29 01:09:02.070244 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:02.070249 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-29 01:09:02.070254 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:02.070259 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-03-29 01:09:02.070265 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:02.070270 | orchestrator |
2026-03-29 01:09:02.070275 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-03-29 01:09:02.070280 | orchestrator | Sunday 29 March 2026 01:07:43 +0000 (0:00:06.935) 0:01:30.302 **********
2026-03-29 01:09:02.070285 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:02.070290 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:09:02.070295 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:09:02.070300 | orchestrator |
2026-03-29 01:09:02.070305 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-03-29 01:09:02.070310 | orchestrator | Sunday 29 March 2026 01:07:47 +0000 (0:00:04.395) 0:01:34.697 **********
2026-03-29 01:09:02.070320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:09:02.070335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-29 01:09:02.070343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-29 01:09:02.070348 | orchestrator |
2026-03-29 01:09:02.070353 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-03-29 01:09:02.070358 | orchestrator | Sunday 29 March 2026 01:07:51 +0000 (0:00:03.308) 0:01:38.006 **********
2026-03-29 01:09:02.070363 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:09:02.070371 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:09:02.070376 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:09:02.070381 | orchestrator |
2026-03-29 01:09:02.070386 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-03-29 01:09:02.070391 | orchestrator | Sunday 29 March 2026 01:07:51 +0000 (0:00:00.282) 0:01:38.288 **********
2026-03-29 01:09:02.070396 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:02.070400 | orchestrator |
2026-03-29 01:09:02.070405 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-03-29 01:09:02.070409 | orchestrator | Sunday 29 March 2026 01:07:53 +0000 (0:00:01.824) 0:01:40.112 **********
2026-03-29 01:09:02.070414 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:02.070419 | orchestrator |
2026-03-29 01:09:02.070423 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-03-29 01:09:02.070428 | orchestrator | Sunday 29 March 2026 01:07:55 +0000 (0:00:02.333) 0:01:42.446 **********
2026-03-29 01:09:02.070432 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:02.070437 | orchestrator |
2026-03-29 01:09:02.070442 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-03-29 01:09:02.070446 | orchestrator | Sunday 29 March 2026 01:07:57 +0000 (0:00:02.403) 0:01:44.850 **********
2026-03-29 01:09:02.070451 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:02.070456 | orchestrator |
2026-03-29 01:09:02.070461 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-03-29 01:09:02.070469 | orchestrator | Sunday 29 March 2026 01:08:25 +0000 (0:00:27.542) 0:02:12.392 **********
2026-03-29 01:09:02.070474 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:02.070479 | orchestrator |
2026-03-29 01:09:02.070484 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-29 01:09:02.070489 | orchestrator | Sunday 29 March 2026 01:08:27 +0000 (0:00:00.263) 0:02:14.144 **********
2026-03-29 01:09:02.070494 | orchestrator |
2026-03-29 01:09:02.070499 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-29 01:09:02.070503 | orchestrator | Sunday 29 March 2026 01:08:27 +0000 (0:00:00.064) 0:02:14.408 **********
2026-03-29 01:09:02.070508 | orchestrator |
2026-03-29 01:09:02.070513 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-03-29 01:09:02.070517 | orchestrator | Sunday 29 March 2026 01:08:27 +0000 (0:00:00.067) 0:02:14.472 **********
2026-03-29 01:09:02.070522 | orchestrator |
2026-03-29 01:09:02.070527 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-03-29 01:09:02.070532 | orchestrator | Sunday 29 March 2026 01:08:27 +0000 (0:00:00.067) 0:02:14.540 **********
2026-03-29 01:09:02.070536 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:09:02.070541 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:09:02.070546 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:09:02.070551 | orchestrator |
2026-03-29 01:09:02.070555 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:09:02.070561 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-29 01:09:02.070567 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-29 01:09:02.070709 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-29 01:09:02.070722 | orchestrator |
2026-03-29 01:09:02.070727 | orchestrator |
2026-03-29 01:09:02.070732 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:09:02.070737 | orchestrator | Sunday 29 March 2026 01:09:00 +0000 (0:00:32.669) 0:02:47.210 **********
2026-03-29 01:09:02.070742 | orchestrator | ===============================================================================
2026-03-29 01:09:02.070747 | orchestrator | glance : Restart glance-api container ---------------------------------- 32.67s
2026-03-29 01:09:02.070758 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.54s
2026-03-29 01:09:02.070762 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 7.34s
2026-03-29 01:09:02.070767 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.94s
2026-03-29 01:09:02.070772 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.58s
2026-03-29 01:09:02.070777 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.43s
2026-03-29 01:09:02.070782 | orchestrator | glance : Copying over config.json files for services -------------------- 4.64s
2026-03-29 01:09:02.070787 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.40s
2026-03-29 01:09:02.070792 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.32s
2026-03-29 01:09:02.070797 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.12s
2026-03-29 01:09:02.070802 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.09s
2026-03-29 01:09:02.070807 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.79s
2026-03-29 01:09:02.070812 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.68s
2026-03-29 01:09:02.070817 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.67s
2026-03-29 01:09:02.070826 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.63s
2026-03-29 01:09:02.070831 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.38s
2026-03-29 01:09:02.070836 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.38s
2026-03-29 01:09:02.070841 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.32s
2026-03-29 01:09:02.070845 | orchestrator | glance : Check glance containers ---------------------------------------- 3.31s
2026-03-29 01:09:02.070850 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.27s
2026-03-29 01:09:02.070856 | orchestrator | 2026-03-29 01:09:02 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:09:02.070866 | orchestrator | 2026-03-29 01:09:02 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED
2026-03-29 01:09:02.070871 | orchestrator | 2026-03-29 01:09:02 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:09:05.127170 | orchestrator | 2026-03-29 01:09:05 | INFO  | Task cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED
2026-03-29 01:09:05.128715 | orchestrator | 2026-03-29 01:09:05 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED
2026-03-29 01:09:05.130106 | orchestrator | 2026-03-29 01:09:05 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:09:05.131343 | orchestrator | 2026-03-29 01:09:05 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED
2026-03-29 01:09:05.131596 | orchestrator | 2026-03-29 01:09:05 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:09:08.162111 | orchestrator | 2026-03-29 01:09:08 | INFO  | Task cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED
2026-03-29 01:09:08.162586 | orchestrator | 2026-03-29 01:09:08 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED
2026-03-29 01:09:08.163415 | orchestrator | 2026-03-29 01:09:08 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:09:08.164458 | orchestrator | 2026-03-29 01:09:08 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED
2026-03-29 01:09:08.164489 | orchestrator | 2026-03-29 01:09:08 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:09:11.215642 | orchestrator | 2026-03-29 01:09:11 | INFO  | Task 
cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED 2026-03-29 01:09:11.216353 | orchestrator | 2026-03-29 01:09:11 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:09:11.217536 | orchestrator | 2026-03-29 01:09:11 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:09:11.218859 | orchestrator | 2026-03-29 01:09:11 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED 2026-03-29 01:09:11.218890 | orchestrator | 2026-03-29 01:09:11 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:14.275713 | orchestrator | 2026-03-29 01:09:14 | INFO  | Task cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED 2026-03-29 01:09:14.277281 | orchestrator | 2026-03-29 01:09:14 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:09:14.278104 | orchestrator | 2026-03-29 01:09:14 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:09:14.279921 | orchestrator | 2026-03-29 01:09:14 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED 2026-03-29 01:09:14.280031 | orchestrator | 2026-03-29 01:09:14 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:17.323038 | orchestrator | 2026-03-29 01:09:17 | INFO  | Task cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED 2026-03-29 01:09:17.323369 | orchestrator | 2026-03-29 01:09:17 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:09:17.324438 | orchestrator | 2026-03-29 01:09:17 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:09:17.325241 | orchestrator | 2026-03-29 01:09:17 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED 2026-03-29 01:09:17.325279 | orchestrator | 2026-03-29 01:09:17 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:20.355650 | orchestrator | 2026-03-29 01:09:20 | INFO  | Task 
cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED 2026-03-29 01:09:20.358415 | orchestrator | 2026-03-29 01:09:20 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:09:20.359991 | orchestrator | 2026-03-29 01:09:20 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:09:20.360902 | orchestrator | 2026-03-29 01:09:20 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED 2026-03-29 01:09:20.360940 | orchestrator | 2026-03-29 01:09:20 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:23.402496 | orchestrator | 2026-03-29 01:09:23 | INFO  | Task cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED 2026-03-29 01:09:23.402961 | orchestrator | 2026-03-29 01:09:23 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state STARTED 2026-03-29 01:09:23.403763 | orchestrator | 2026-03-29 01:09:23 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:09:23.405793 | orchestrator | 2026-03-29 01:09:23 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED 2026-03-29 01:09:23.405832 | orchestrator | 2026-03-29 01:09:23 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:26.439405 | orchestrator | 2026-03-29 01:09:26 | INFO  | Task cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED 2026-03-29 01:09:26.443556 | orchestrator | 2026-03-29 01:09:26 | INFO  | Task aaa38497-a40a-42cd-b59f-02ffc4ee3816 is in state SUCCESS 2026-03-29 01:09:26.446066 | orchestrator | 2026-03-29 01:09:26.446110 | orchestrator | 2026-03-29 01:09:26.446117 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:09:26.446123 | orchestrator | 2026-03-29 01:09:26.446128 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:09:26.446146 | orchestrator | Sunday 29 March 2026 01:06:36 +0000 (0:00:00.268) 0:00:00.268 
********** 2026-03-29 01:09:26.446151 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:09:26.446156 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:09:26.446161 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:09:26.446165 | orchestrator | 2026-03-29 01:09:26.446169 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:09:26.446173 | orchestrator | Sunday 29 March 2026 01:06:36 +0000 (0:00:00.336) 0:00:00.605 ********** 2026-03-29 01:09:26.446178 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-29 01:09:26.446188 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-29 01:09:26.446192 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-29 01:09:26.446201 | orchestrator | 2026-03-29 01:09:26.446205 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-29 01:09:26.446209 | orchestrator | 2026-03-29 01:09:26.446213 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-29 01:09:26.446216 | orchestrator | Sunday 29 March 2026 01:06:36 +0000 (0:00:00.502) 0:00:01.107 ********** 2026-03-29 01:09:26.446242 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:09:26.446280 | orchestrator | 2026-03-29 01:09:26.446284 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-29 01:09:26.446288 | orchestrator | Sunday 29 March 2026 01:06:37 +0000 (0:00:00.535) 0:00:01.643 ********** 2026-03-29 01:09:26.446292 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-29 01:09:26.446296 | orchestrator | 2026-03-29 01:09:26.446443 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-29 01:09:26.446460 | orchestrator | Sunday 29 March 2026 
01:06:40 +0000 (0:00:03.168) 0:00:04.812 ********** 2026-03-29 01:09:26.446465 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-29 01:09:26.446469 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-29 01:09:26.446473 | orchestrator | 2026-03-29 01:09:26.446477 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-29 01:09:26.446481 | orchestrator | Sunday 29 March 2026 01:06:47 +0000 (0:00:06.397) 0:00:11.209 ********** 2026-03-29 01:09:26.446485 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 01:09:26.446489 | orchestrator | 2026-03-29 01:09:26.446493 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-29 01:09:26.446497 | orchestrator | Sunday 29 March 2026 01:06:50 +0000 (0:00:03.428) 0:00:14.637 ********** 2026-03-29 01:09:26.446501 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 01:09:26.446505 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-29 01:09:26.446508 | orchestrator | 2026-03-29 01:09:26.446512 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-29 01:09:26.446516 | orchestrator | Sunday 29 March 2026 01:06:54 +0000 (0:00:03.638) 0:00:18.276 ********** 2026-03-29 01:09:26.446520 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-29 01:09:26.446524 | orchestrator | 2026-03-29 01:09:26.446689 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-29 01:09:26.446724 | orchestrator | Sunday 29 March 2026 01:06:56 +0000 (0:00:02.821) 0:00:21.098 ********** 2026-03-29 01:09:26.446728 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-29 
01:09:26.446732 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-29 01:09:26.446736 | orchestrator | 2026-03-29 01:09:26.446740 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-29 01:09:26.446744 | orchestrator | Sunday 29 March 2026 01:07:03 +0000 (0:00:06.318) 0:00:27.416 ********** 2026-03-29 01:09:26.446763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.446795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.446801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.446805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.446810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.446817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.446824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.446839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.446844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.446848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.446852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.446861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.446865 | orchestrator | 2026-03-29 01:09:26.446869 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-29 01:09:26.446873 | orchestrator | Sunday 29 March 2026 01:07:05 +0000 (0:00:02.221) 0:00:29.638 ********** 2026-03-29 01:09:26.446877 | orchestrator | skipping: [testbed-node-0] 
2026-03-29 01:09:26.446881 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:26.446885 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:26.446888 | orchestrator | 2026-03-29 01:09:26.446892 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-29 01:09:26.446896 | orchestrator | Sunday 29 March 2026 01:07:05 +0000 (0:00:00.377) 0:00:30.015 ********** 2026-03-29 01:09:26.446900 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:09:26.446904 | orchestrator | 2026-03-29 01:09:26.446919 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-29 01:09:26.446923 | orchestrator | Sunday 29 March 2026 01:07:06 +0000 (0:00:00.634) 0:00:30.650 ********** 2026-03-29 01:09:26.446927 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-29 01:09:26.446932 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-29 01:09:26.446936 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-29 01:09:26.446939 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-29 01:09:26.446943 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-29 01:09:26.446947 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-29 01:09:26.446952 | orchestrator | 2026-03-29 01:09:26.446955 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-29 01:09:26.446959 | orchestrator | Sunday 29 March 2026 01:07:08 +0000 (0:00:02.098) 0:00:32.748 ********** 2026-03-29 01:09:26.446964 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:09:26.446969 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:09:26.446978 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:09:26.446982 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:09:26.446997 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:09:26.447002 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-29 01:09:26.447006 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 01:09:26.447013 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 01:09:26.447019 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 01:09:26.447034 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 01:09:26.447039 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 01:09:26.447043 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-29 01:09:26.447049 | orchestrator | 2026-03-29 01:09:26.447053 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-29 01:09:26.447057 | orchestrator | Sunday 29 March 2026 01:07:12 +0000 (0:00:03.590) 0:00:36.339 ********** 2026-03-29 
01:09:26.447061 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 01:09:26.447065 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 01:09:26.447069 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-29 01:09:26.447073 | orchestrator | 2026-03-29 01:09:26.447077 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-29 01:09:26.447080 | orchestrator | Sunday 29 March 2026 01:07:14 +0000 (0:00:02.413) 0:00:38.752 ********** 2026-03-29 01:09:26.447084 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-29 01:09:26.447088 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-29 01:09:26.447092 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-29 01:09:26.447096 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 01:09:26.447099 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 01:09:26.447105 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-29 01:09:26.447109 | orchestrator | 2026-03-29 01:09:26.447113 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-29 01:09:26.447117 | orchestrator | Sunday 29 March 2026 01:07:17 +0000 (0:00:02.924) 0:00:41.676 ********** 2026-03-29 01:09:26.447121 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-29 01:09:26.447124 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-29 01:09:26.447128 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-29 01:09:26.447132 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-29 
01:09:26.447136 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-29 01:09:26.447140 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-29 01:09:26.447144 | orchestrator | 2026-03-29 01:09:26.447148 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-29 01:09:26.447151 | orchestrator | Sunday 29 March 2026 01:07:18 +0000 (0:00:01.331) 0:00:43.010 ********** 2026-03-29 01:09:26.447155 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:26.447159 | orchestrator | 2026-03-29 01:09:26.447163 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-29 01:09:26.447167 | orchestrator | Sunday 29 March 2026 01:07:19 +0000 (0:00:00.424) 0:00:43.434 ********** 2026-03-29 01:09:26.447171 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:26.447175 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:26.447188 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:26.447193 | orchestrator | 2026-03-29 01:09:26.447197 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-29 01:09:26.447201 | orchestrator | Sunday 29 March 2026 01:07:20 +0000 (0:00:01.092) 0:00:44.527 ********** 2026-03-29 01:09:26.447204 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:09:26.447208 | orchestrator | 2026-03-29 01:09:26.447212 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-29 01:09:26.447216 | orchestrator | Sunday 29 March 2026 01:07:22 +0000 (0:00:01.925) 0:00:46.452 ********** 2026-03-29 01:09:26.447223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447299 | orchestrator | 2026-03-29 01:09:26.447302 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-29 01:09:26.447306 | orchestrator | Sunday 29 March 2026 01:07:27 +0000 (0:00:04.959) 0:00:51.412 ********** 2026-03-29 01:09:26.447310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:09:26.447314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447336 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:26.447340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:09:26.447344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447358 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:26.447362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:09:26.447371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447386 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:26.447390 | orchestrator | 2026-03-29 01:09:26.447394 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-29 01:09:26.447399 | orchestrator | Sunday 29 March 2026 01:07:28 +0000 (0:00:00.824) 0:00:52.236 ********** 2026-03-29 01:09:26.447405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:09:26.447410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:09:26.447430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447439 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:26.447446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447460 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:26.447465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:09:26.447469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447483 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:26.447487 | orchestrator | 2026-03-29 01:09:26.447492 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-29 01:09:26.447500 | orchestrator | Sunday 29 March 2026 01:07:29 +0000 (0:00:01.205) 0:00:53.441 ********** 2026-03-29 01:09:26.447505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447596 | orchestrator | 2026-03-29 01:09:26.447603 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-29 01:09:26.447610 | orchestrator | Sunday 29 March 2026 01:07:33 +0000 (0:00:03.885) 0:00:57.326 ********** 2026-03-29 01:09:26.447614 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-29 01:09:26.447621 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-29 01:09:26.447626 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-29 01:09:26.447630 | orchestrator | 2026-03-29 01:09:26.447634 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-29 01:09:26.447639 | orchestrator | Sunday 29 March 2026 01:07:35 +0000 (0:00:01.925) 0:00:59.252 ********** 2026-03-29 01:09:26.447644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447680 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447715 | orchestrator | 2026-03-29 01:09:26.447720 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-29 01:09:26.447724 | orchestrator | Sunday 29 March 2026 01:07:49 +0000 (0:00:14.691) 0:01:13.944 ********** 2026-03-29 01:09:26.447728 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:26.447733 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:09:26.447737 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:09:26.447742 | orchestrator | 2026-03-29 01:09:26.447746 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-29 01:09:26.447749 | orchestrator | Sunday 29 March 2026 01:07:51 +0000 (0:00:01.628) 0:01:15.572 ********** 2026-03-29 01:09:26.447759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:09:26.447765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447782 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:26.447786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:09:26.447790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447806 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:26.447812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-29 01:09:26.447816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-29 01:09:26.447831 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:26.447835 | orchestrator | 2026-03-29 01:09:26.447838 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-29 01:09:26.447842 | orchestrator | Sunday 29 March 2026 01:07:52 +0000 (0:00:00.591) 0:01:16.163 ********** 2026-03-29 01:09:26.447846 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:26.447850 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:26.447854 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:26.447858 | orchestrator | 2026-03-29 01:09:26.447862 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-29 01:09:26.447865 | orchestrator | Sunday 29 March 2026 01:07:52 +0000 (0:00:00.296) 0:01:16.460 ********** 2026-03-29 01:09:26.447871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-29 01:09:26.447888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 
01:09:26.447912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-29 01:09:26.447936 | orchestrator | 2026-03-29 01:09:26.447940 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-29 01:09:26.447944 | orchestrator | Sunday 29 March 2026 01:07:55 +0000 (0:00:02.989) 0:01:19.450 ********** 2026-03-29 01:09:26.447947 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:26.447951 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:09:26.447955 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:09:26.447959 | orchestrator | 2026-03-29 01:09:26.447962 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-29 01:09:26.447966 | orchestrator | Sunday 29 March 2026 01:07:55 +0000 (0:00:00.525) 0:01:19.976 ********** 2026-03-29 01:09:26.447970 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:26.447974 | orchestrator | 2026-03-29 01:09:26.447977 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 
2026-03-29 01:09:26.447981 | orchestrator | Sunday 29 March 2026 01:07:58 +0000 (0:00:02.307) 0:01:22.283 ********** 2026-03-29 01:09:26.447985 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:26.447989 | orchestrator | 2026-03-29 01:09:26.447993 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-29 01:09:26.447998 | orchestrator | Sunday 29 March 2026 01:08:00 +0000 (0:00:02.572) 0:01:24.856 ********** 2026-03-29 01:09:26.448002 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:26.448006 | orchestrator | 2026-03-29 01:09:26.448010 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-29 01:09:26.448016 | orchestrator | Sunday 29 March 2026 01:08:19 +0000 (0:00:18.293) 0:01:43.149 ********** 2026-03-29 01:09:26.448020 | orchestrator | 2026-03-29 01:09:26.448024 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-29 01:09:26.448028 | orchestrator | Sunday 29 March 2026 01:08:19 +0000 (0:00:00.060) 0:01:43.210 ********** 2026-03-29 01:09:26.448031 | orchestrator | 2026-03-29 01:09:26.448035 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-29 01:09:26.448039 | orchestrator | Sunday 29 March 2026 01:08:19 +0000 (0:00:00.060) 0:01:43.270 ********** 2026-03-29 01:09:26.448043 | orchestrator | 2026-03-29 01:09:26.448046 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-29 01:09:26.448050 | orchestrator | Sunday 29 March 2026 01:08:19 +0000 (0:00:00.061) 0:01:43.332 ********** 2026-03-29 01:09:26.448054 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:26.448058 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:09:26.448062 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:09:26.448065 | orchestrator | 2026-03-29 01:09:26.448069 | orchestrator | RUNNING 
HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-29 01:09:26.448073 | orchestrator | Sunday 29 March 2026 01:08:42 +0000 (0:00:23.671) 0:02:07.004 ********** 2026-03-29 01:09:26.448077 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:09:26.448080 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:09:26.448084 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:26.448088 | orchestrator | 2026-03-29 01:09:26.448092 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-29 01:09:26.448095 | orchestrator | Sunday 29 March 2026 01:08:53 +0000 (0:00:10.635) 0:02:17.639 ********** 2026-03-29 01:09:26.448099 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:26.448103 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:09:26.448107 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:09:26.448111 | orchestrator | 2026-03-29 01:09:26.448114 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-29 01:09:26.448118 | orchestrator | Sunday 29 March 2026 01:09:16 +0000 (0:00:23.322) 0:02:40.961 ********** 2026-03-29 01:09:26.448122 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:09:26.448126 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:09:26.448129 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:09:26.448133 | orchestrator | 2026-03-29 01:09:26.448137 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-29 01:09:26.448141 | orchestrator | Sunday 29 March 2026 01:09:23 +0000 (0:00:06.422) 0:02:47.384 ********** 2026-03-29 01:09:26.448145 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:09:26.448148 | orchestrator | 2026-03-29 01:09:26.448152 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:09:26.448156 | orchestrator | testbed-node-0 : ok=30  changed=22  
unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-29 01:09:26.448160 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:09:26.448164 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:09:26.448168 | orchestrator | 2026-03-29 01:09:26.448172 | orchestrator | 2026-03-29 01:09:26.448175 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:09:26.448179 | orchestrator | Sunday 29 March 2026 01:09:23 +0000 (0:00:00.237) 0:02:47.622 ********** 2026-03-29 01:09:26.448183 | orchestrator | =============================================================================== 2026-03-29 01:09:26.448188 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.67s 2026-03-29 01:09:26.448195 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.32s 2026-03-29 01:09:26.448203 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.29s 2026-03-29 01:09:26.448218 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.69s 2026-03-29 01:09:26.448225 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.64s 2026-03-29 01:09:26.448235 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.42s 2026-03-29 01:09:26.448241 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.40s 2026-03-29 01:09:26.448248 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.32s 2026-03-29 01:09:26.448254 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.96s 2026-03-29 01:09:26.448261 | orchestrator | cinder : Copying over config.json files for services 
-------------------- 3.89s 2026-03-29 01:09:26.448346 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.64s 2026-03-29 01:09:26.448357 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.59s 2026-03-29 01:09:26.448362 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.43s 2026-03-29 01:09:26.448366 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.17s 2026-03-29 01:09:26.448369 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.99s 2026-03-29 01:09:26.448373 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.92s 2026-03-29 01:09:26.448377 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.82s 2026-03-29 01:09:26.448384 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.57s 2026-03-29 01:09:26.448388 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.41s 2026-03-29 01:09:26.448392 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.31s 2026-03-29 01:09:26.448395 | orchestrator | 2026-03-29 01:09:26 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:09:26.448399 | orchestrator | 2026-03-29 01:09:26 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED 2026-03-29 01:09:26.448403 | orchestrator | 2026-03-29 01:09:26 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:09:29.484965 | orchestrator | 2026-03-29 01:09:29 | INFO  | Task cf51d1db-81ce-42ec-8020-9912092c5911 is in state STARTED 2026-03-29 01:09:29.485074 | orchestrator | 2026-03-29 01:09:29 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:09:29.485951 | orchestrator | 2026-03-29 01:09:29 | INFO  | 
Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED 2026-03-29 01:09:29.486058 | orchestrator | 2026-03-29 01:09:29 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:11:13.079810 | orchestrator | 2026-03-29 01:11:13 | INFO  | Task cf51d1db-81ce-42ec-8020-9912092c5911 is in state SUCCESS 2026-03-29 01:11:13.079864 | orchestrator | 2026-03-29 01:11:13 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:11:13.081157 | orchestrator | 2026-03-29 01:11:13 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:11:13.082136 | orchestrator | 2026-03-29 01:11:13 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED 2026-03-29 01:11:13.082283 | orchestrator | 2026-03-29 01:11:13 | INFO  |
Wait 1 second(s) until the next check 2026-03-29 01:11:16.124102 | orchestrator | 2026-03-29 01:11:16 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:11:16.124151 | orchestrator | 2026-03-29 01:11:16 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:11:16.125538 | orchestrator | 2026-03-29 01:11:16 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state STARTED 2026-03-29 01:11:16.125838 | orchestrator | 2026-03-29 01:11:16 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:11:19.163894 | orchestrator | 2026-03-29 01:11:19 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:11:19.164617 | orchestrator | 2026-03-29 01:11:19 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:11:19.166399 | orchestrator | 2026-03-29 01:11:19 | INFO  | Task 3fbf47ea-ff20-4790-b895-9f0eb2a28be4 is in state SUCCESS 2026-03-29 01:11:19.168594 | orchestrator | 2026-03-29 01:11:19.168643 | orchestrator | 2026-03-29 01:11:19.168654 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:11:19.168662 | orchestrator | 2026-03-29 01:11:19.168670 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:11:19.168678 | orchestrator | Sunday 29 March 2026 01:08:07 +0000 (0:00:00.174) 0:00:00.174 ********** 2026-03-29 01:11:19.168686 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:11:19.168739 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:11:19.168748 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:11:19.168756 | orchestrator | 2026-03-29 01:11:19.168763 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:11:19.168771 | orchestrator | Sunday 29 March 2026 01:08:07 +0000 (0:00:00.300) 0:00:00.474 ********** 2026-03-29 01:11:19.168779 | orchestrator | ok: 
[testbed-node-0] => (item=enable_nova_True) 2026-03-29 01:11:19.168802 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-03-29 01:11:19.168826 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-03-29 01:11:19.168834 | orchestrator | 2026-03-29 01:11:19.168841 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-03-29 01:11:19.168848 | orchestrator | 2026-03-29 01:11:19.168855 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-03-29 01:11:19.168863 | orchestrator | Sunday 29 March 2026 01:08:08 +0000 (0:00:00.642) 0:00:01.117 ********** 2026-03-29 01:11:19.168912 | orchestrator | 2026-03-29 01:11:19.168919 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2026-03-29 01:11:19.168927 | orchestrator | 2026-03-29 01:11:19.168934 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2026-03-29 01:11:19.168941 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:11:19.168948 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:11:19.168971 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:11:19.168977 | orchestrator | 2026-03-29 01:11:19.168984 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:11:19.168991 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:11:19.169000 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:11:19.169007 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:11:19.169014 | orchestrator | 2026-03-29 01:11:19.169021 | orchestrator | 2026-03-29 01:11:19.169028 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 
01:11:19.169035 | orchestrator | Sunday 29 March 2026 01:11:10 +0000 (0:03:02.753) 0:03:03.870 ********** 2026-03-29 01:11:19.169042 | orchestrator | =============================================================================== 2026-03-29 01:11:19.169049 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 182.75s 2026-03-29 01:11:19.169065 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2026-03-29 01:11:19.169072 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-03-29 01:11:19.169079 | orchestrator | 2026-03-29 01:11:19.169086 | orchestrator | 2026-03-29 01:11:19.169093 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:11:19.169100 | orchestrator | 2026-03-29 01:11:19.169107 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:11:19.169114 | orchestrator | Sunday 29 March 2026 01:09:04 +0000 (0:00:00.265) 0:00:00.265 ********** 2026-03-29 01:11:19.169121 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:11:19.169127 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:11:19.169134 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:11:19.169141 | orchestrator | 2026-03-29 01:11:19.169148 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:11:19.169156 | orchestrator | Sunday 29 March 2026 01:09:04 +0000 (0:00:00.252) 0:00:00.518 ********** 2026-03-29 01:11:19.169163 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-29 01:11:19.169170 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-29 01:11:19.169177 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-29 01:11:19.169184 | orchestrator | 2026-03-29 01:11:19.169191 | orchestrator | PLAY [Apply role grafana] 
****************************************************** 2026-03-29 01:11:19.169198 | orchestrator | 2026-03-29 01:11:19.169205 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-29 01:11:19.169213 | orchestrator | Sunday 29 March 2026 01:09:05 +0000 (0:00:00.362) 0:00:00.881 ********** 2026-03-29 01:11:19.169220 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:11:19.169227 | orchestrator | 2026-03-29 01:11:19.169234 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-29 01:11:19.169247 | orchestrator | Sunday 29 March 2026 01:09:05 +0000 (0:00:00.474) 0:00:01.355 ********** 2026-03-29 01:11:19.169256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169295 | orchestrator | 2026-03-29 01:11:19.169302 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-29 01:11:19.169310 | orchestrator | Sunday 29 March 2026 01:09:06 +0000 (0:00:00.610) 0:00:01.966 ********** 2026-03-29 01:11:19.169318 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-29 01:11:19.169326 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-29 01:11:19.169334 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:11:19.169342 | orchestrator | 2026-03-29 01:11:19.169350 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-29 01:11:19.169387 | orchestrator | Sunday 29 March 2026 01:09:07 +0000 (0:00:00.776) 0:00:02.743 ********** 2026-03-29 01:11:19.169395 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:11:19.169402 | orchestrator | 
2026-03-29 01:11:19.169412 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-29 01:11:19.169420 | orchestrator | Sunday 29 March 2026 01:09:07 +0000 (0:00:00.679) 0:00:03.422 ********** 2026-03-29 01:11:19.169427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169462 | orchestrator | 2026-03-29 01:11:19.169469 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-29 01:11:19.169476 | orchestrator | Sunday 29 March 2026 01:09:09 +0000 (0:00:01.319) 0:00:04.741 ********** 2026-03-29 01:11:19.169483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:11:19.169491 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:11:19.169499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:11:19.169507 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:11:19.169517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:11:19.169524 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:11:19.169535 | orchestrator | 2026-03-29 01:11:19.169542 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-29 01:11:19.169549 | orchestrator | Sunday 29 March 2026 01:09:09 +0000 (0:00:00.364) 0:00:05.106 ********** 2026-03-29 01:11:19.169556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:11:19.169562 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:11:19.169568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:11:19.169574 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:11:19.169584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-29 01:11:19.169590 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:11:19.169596 | orchestrator | 2026-03-29 01:11:19.169602 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-29 01:11:19.169608 | orchestrator | Sunday 29 March 2026 01:09:10 +0000 
(0:00:00.887) 0:00:05.993 ********** 2026-03-29 01:11:19.169614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169640 | orchestrator | 2026-03-29 01:11:19.169646 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-29 01:11:19.169653 | orchestrator | Sunday 29 March 2026 01:09:11 +0000 (0:00:01.277) 0:00:07.271 ********** 2026-03-29 01:11:19.169660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169679 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-29 01:11:19.169685 | orchestrator | 2026-03-29 01:11:19.169691 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-29 01:11:19.169697 | orchestrator | Sunday 29 March 2026 01:09:12 +0000 (0:00:01.290) 0:00:08.562 ********** 2026-03-29 01:11:19.169703 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:11:19.169711 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:11:19.169718 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:11:19.169724 | orchestrator | 2026-03-29 01:11:19.169730 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-29 01:11:19.169737 | orchestrator | Sunday 29 March 2026 01:09:13 +0000 (0:00:00.474) 0:00:09.037 ********** 2026-03-29 01:11:19.169743 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-29 01:11:19.169758 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-29 01:11:19.169764 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-29 01:11:19.169770 | orchestrator | 2026-03-29 01:11:19.169776 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 
2026-03-29 01:11:19.169781 | orchestrator | Sunday 29 March 2026 01:09:14 +0000 (0:00:01.220) 0:00:10.258 ********** 2026-03-29 01:11:19.169789 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-29 01:11:19.169795 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-29 01:11:19.169800 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-29 01:11:19.169806 | orchestrator | 2026-03-29 01:11:19.169812 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-29 01:11:19.169817 | orchestrator | Sunday 29 March 2026 01:09:15 +0000 (0:00:01.218) 0:00:11.477 ********** 2026-03-29 01:11:19.169823 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-29 01:11:19.169828 | orchestrator | 2026-03-29 01:11:19.169834 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-29 01:11:19.169840 | orchestrator | Sunday 29 March 2026 01:09:16 +0000 (0:00:00.740) 0:00:12.217 ********** 2026-03-29 01:11:19.169846 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-29 01:11:19.169852 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-29 01:11:19.169858 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:11:19.169864 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:11:19.169871 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:11:19.169877 | orchestrator | 2026-03-29 01:11:19.169884 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-29 01:11:19.169891 | orchestrator | Sunday 29 March 2026 01:09:17 +0000 (0:00:00.653) 0:00:12.871 ********** 2026-03-29 01:11:19.169897 | orchestrator | skipping: 
[testbed-node-0] 2026-03-29 01:11:19.169904 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:11:19.169910 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:11:19.169916 | orchestrator | 2026-03-29 01:11:19.169923 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-29 01:11:19.169930 | orchestrator | Sunday 29 March 2026 01:09:17 +0000 (0:00:00.684) 0:00:13.556 ********** 2026-03-29 01:11:19.169937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1311573, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1785429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.169949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1311573, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1785429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.169957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1311573, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1785429, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.169971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1311945, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.328201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.169981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1311945, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.328201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.169988 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1311945, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.328201, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.169995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1311596, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.180535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1311596, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.180535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170053 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1311596, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.180535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1311948, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3308036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1311948, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3308036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170084 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1311948, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3308036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1311618, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1853073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1311618, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1853073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-29 01:11:19.170111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1311618, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1853073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1311657, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1949492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1311657, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1949492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1311657, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1949492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1311570, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1776354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1311570, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1776354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1311586, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1794848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1311570, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1776354, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1311586, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1794848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1311598, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.181332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1311586, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1794848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1311598, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.181332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1311622, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1864104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1311598, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.181332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1311622, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1864104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1311622, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1864104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1311659, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1962516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1311659, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1962516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1311591, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.180535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1311659, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1962516, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1311591, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.180535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1311626, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1938972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1311591, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.180535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1311626, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1938972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1311619, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1858253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1311626, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1938972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1311615, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1853073, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1311619, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1858253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1311619, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1858253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1311606, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1840477, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1311615, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1853073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1311623, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.186643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1311615, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1774743596.1853073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1311606, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1840477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1311600, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1820974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1311606, 'dev': 98, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1840477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1311658, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.196053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1311623, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.186643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 
1311623, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.186643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1312123, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3776782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1311600, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1820974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
44791, 'inode': 1311600, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.1820974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1311983, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.337786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1311658, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.196053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1311971, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3333287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1311658, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.196053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1312123, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3776782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1311995, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3401399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1312123, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3776782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1311983, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.337786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1311961, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.331444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1311983, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.337786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1311971, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3333287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170967 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1312027, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3500216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1311971, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3333287, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.170981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1311995, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3401399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-03-29 01:11:19.170994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1311997, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.344786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:11:19.171006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1311995, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3401399, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1311961, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774743596.331444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1312032, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.352591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1311961, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.331444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 65458, 'inode': 1312027, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3500216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1312098, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3771718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1312027, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3500216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1312025, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3494172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1311997, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.344786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1311997, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.344786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1311993, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.339381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1312032, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.352591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1311978, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.33554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171194 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1312032, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.352591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1312098, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3771718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1311992, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.338999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 
01:11:19.171226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1312098, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3771718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1311974, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3343747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1312025, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3494172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1311994, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.339799, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1312025, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3494172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1311993, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.339381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1312047, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.356361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1311993, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.339381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1312042, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.353974, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1311978, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.33554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1311978, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.33554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1311963, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1774743596.3317838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1311992, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.338999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1311992, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.338999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1311967, 'dev': 98, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.333135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1311974, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3343747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1311974, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3343747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1311994, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.339799, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1312014, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.347786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1311994, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.339799, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1312047, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.356361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1312038, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.352591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1312047, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.356361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171451 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1312042, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.353974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1312042, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.353974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-29 01:11:19.171465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1311963, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3317838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}})
2026-03-29 01:11:19.171469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1311963, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.3317838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:11:19.171475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1311967, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.333135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:11:19.171480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1311967, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.333135, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:11:19.171484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1312014, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.347786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:11:19.171493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1312014, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.347786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:11:19.171497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1312038, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.352591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:11:19.171501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1312038, 'dev': 98, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774743596.352591, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-29 01:11:19.171505 | orchestrator |
2026-03-29 01:11:19.171510 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-03-29 01:11:19.171515 | orchestrator | Sunday 29 March 2026 01:09:52 +0000 (0:00:34.769) 0:00:48.325 **********
2026-03-29 01:11:19.171522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-29 01:11:19.171528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-29 01:11:19.171532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-29 01:11:19.171539 | orchestrator |
2026-03-29 01:11:19.171544 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-29 01:11:19.171548 | orchestrator | Sunday 29 March 2026 01:09:53 +0000 (0:00:01.088) 0:00:49.413 **********
2026-03-29 01:11:19.171554 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:11:19.171561 | orchestrator |
2026-03-29 01:11:19.171571 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-29 01:11:19.171578 | orchestrator | Sunday 29 March 2026 01:09:56 +0000 (0:00:02.955) 0:00:52.369 **********
2026-03-29 01:11:19.171585 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:11:19.171592 | orchestrator |
2026-03-29 01:11:19.171598 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-29 01:11:19.171605 | orchestrator | Sunday 29 March 2026 01:09:59 +0000 (0:00:02.580) 0:00:54.949 **********
2026-03-29 01:11:19.171611 | orchestrator |
2026-03-29 01:11:19.171618 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-29 01:11:19.171623 | orchestrator | Sunday 29 March 2026 01:09:59 +0000 (0:00:00.058) 0:00:55.008 **********
2026-03-29 01:11:19.171627 | orchestrator |
2026-03-29 01:11:19.171631 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-29 01:11:19.171635 | orchestrator | Sunday 29 March 2026 01:09:59 +0000 (0:00:00.061) 0:00:55.069 **********
2026-03-29 01:11:19.171639 | orchestrator |
2026-03-29 01:11:19.171642 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-29 01:11:19.171646 | orchestrator | Sunday 29 March 2026 01:09:59 +0000 (0:00:00.158) 0:00:55.227 **********
2026-03-29 01:11:19.171650 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:11:19.171654 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:11:19.171658 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:11:19.171661 | orchestrator |
2026-03-29 01:11:19.171665 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-29 01:11:19.171669 | orchestrator | Sunday 29 March 2026 01:10:06 +0000 (0:00:06.694) 0:01:01.922 **********
2026-03-29 01:11:19.171673 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:11:19.171677 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:11:19.171681 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-29 01:11:19.171685 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-29 01:11:19.171689 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-03-29 01:11:19.171693 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:11:19.171700 | orchestrator |
2026-03-29 01:11:19.171707 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-29 01:11:19.171713 | orchestrator | Sunday 29 March 2026 01:10:45 +0000 (0:00:39.152) 0:01:41.074 **********
2026-03-29 01:11:19.171720 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:11:19.171726 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:11:19.171732 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:11:19.171738 | orchestrator |
2026-03-29 01:11:19.171745 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-29 01:11:19.171752 | orchestrator | Sunday 29 March 2026 01:11:13 +0000 (0:00:27.557) 0:02:08.632 **********
2026-03-29 01:11:19.171759 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:11:19.171770 | orchestrator |
2026-03-29 01:11:19.171774 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-29 01:11:19.171782 | orchestrator | Sunday 29 March 2026 01:11:15 +0000 (0:00:02.066) 0:02:10.699 **********
2026-03-29 01:11:19.171786 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:11:19.171790 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:11:19.171794 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:11:19.171798 | orchestrator |
2026-03-29 01:11:19.171802 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-29 01:11:19.171806 | orchestrator | Sunday 29 March 2026 01:11:15 +0000 (0:00:00.536) 0:02:11.236 **********
2026-03-29 01:11:19.171810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-29 01:11:19.171816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-29 01:11:19.171821 | orchestrator |
2026-03-29 01:11:19.171825 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-29 01:11:19.171829 | orchestrator | Sunday 29 March 2026 01:11:17 +0000 (0:00:02.181) 0:02:13.417 **********
2026-03-29 01:11:19.171833 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:11:19.171837 | orchestrator |
2026-03-29 01:11:19.171840 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:11:19.171844 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 01:11:19.171849 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 01:11:19.171852 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-29 01:11:19.171856 | orchestrator |
2026-03-29 01:11:19.171860 | orchestrator |
2026-03-29 01:11:19.171864 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:11:19.171868 | orchestrator | Sunday 29 March 2026 01:11:18 +0000 (0:00:00.242) 0:02:13.659 **********
2026-03-29 01:11:19.171872 | orchestrator | ===============================================================================
2026-03-29 01:11:19.171879 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.15s
2026-03-29 01:11:19.171883 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 34.77s
2026-03-29 01:11:19.171886 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 27.56s
2026-03-29 01:11:19.171890 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.69s
2026-03-29 01:11:19.171894 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.96s
2026-03-29 01:11:19.171898 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.58s
2026-03-29 01:11:19.171902 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.18s
2026-03-29 01:11:19.171906 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.07s
2026-03-29 01:11:19.171909 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.32s
2026-03-29 01:11:19.171913 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.29s
2026-03-29 01:11:19.171917 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.28s
2026-03-29 01:11:19.171921 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s
2026-03-29 01:11:19.171928 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.22s
2026-03-29 01:11:19.171932 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.09s
2026-03-29 01:11:19.171936 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.89s
2026-03-29 01:11:19.171940 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.78s
2026-03-29 01:11:19.171944 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.74s
2026-03-29 01:11:19.171947 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 0.68s
2026-03-29 01:11:19.171951 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.68s
2026-03-29 01:11:19.171955 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.65s
2026-03-29 01:11:22.223180 | orchestrator | 2026-03-29 01:11:22 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED
2026-03-29 01:11:22.224303 | orchestrator | 2026-03-29 01:11:22 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:11:22.224428 | orchestrator | 2026-03-29 01:11:22 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:11:25.390680 | orchestrator | 2026-03-29 01:11:25 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED
2026-03-29 01:11:25.392134 | orchestrator | 2026-03-29 01:11:25 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:11:25.392548 | orchestrator | 2026-03-29 01:11:25 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:11:28.431682 | orchestrator | 2026-03-29 01:11:28 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED
2026-03-29 01:11:28.432292 | orchestrator | 2026-03-29 01:11:28 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:11:28.432324 | orchestrator | 2026-03-29 01:11:28 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:11:31.480707 | orchestrator | 2026-03-29 01:11:31 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED
2026-03-29 01:11:31.482212 | orchestrator | 2026-03-29 01:11:31 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:11:31.482280 | orchestrator | 2026-03-29 01:11:31 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: tasks ac3837b9-09b0-4de4-9bb1-a2f297ab672d and 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 remained in state STARTED, checked every ~3 seconds from 01:11:34 to 01:14:21 ...]
2026-03-29 01:14:24.955736 | orchestrator | 2026-03-29 01:14:24 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED
2026-03-29 01:14:24.956661 | orchestrator | 2026-03-29 01:14:24 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:14:24.956702 | orchestrator | 2026-03-29 01:14:24 | INFO  | Wait
1 second(s) until the next check 2026-03-29 01:14:27.999898 | orchestrator | 2026-03-29 01:14:28 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:14:28.003596 | orchestrator | 2026-03-29 01:14:28 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:14:28.003660 | orchestrator | 2026-03-29 01:14:28 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:14:31.043358 | orchestrator | 2026-03-29 01:14:31 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:14:31.045101 | orchestrator | 2026-03-29 01:14:31 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:14:31.045171 | orchestrator | 2026-03-29 01:14:31 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:14:34.091482 | orchestrator | 2026-03-29 01:14:34 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:14:34.093196 | orchestrator | 2026-03-29 01:14:34 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:14:34.093260 | orchestrator | 2026-03-29 01:14:34 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:14:37.133640 | orchestrator | 2026-03-29 01:14:37 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:14:37.135106 | orchestrator | 2026-03-29 01:14:37 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:14:37.135153 | orchestrator | 2026-03-29 01:14:37 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:14:40.172283 | orchestrator | 2026-03-29 01:14:40 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:14:40.175196 | orchestrator | 2026-03-29 01:14:40 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:14:40.175293 | orchestrator | 2026-03-29 01:14:40 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:14:43.200972 | orchestrator | 
2026-03-29 01:14:43 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:14:43.201125 | orchestrator | 2026-03-29 01:14:43 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:14:43.201140 | orchestrator | 2026-03-29 01:14:43 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:14:46.232479 | orchestrator | 2026-03-29 01:14:46 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:14:46.232632 | orchestrator | 2026-03-29 01:14:46 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:14:46.232647 | orchestrator | 2026-03-29 01:14:46 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:14:49.260051 | orchestrator | 2026-03-29 01:14:49 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:14:49.260806 | orchestrator | 2026-03-29 01:14:49 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:14:49.260904 | orchestrator | 2026-03-29 01:14:49 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:14:52.301862 | orchestrator | 2026-03-29 01:14:52 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:14:52.303330 | orchestrator | 2026-03-29 01:14:52 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:14:52.303402 | orchestrator | 2026-03-29 01:14:52 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:14:55.345671 | orchestrator | 2026-03-29 01:14:55 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:14:55.347583 | orchestrator | 2026-03-29 01:14:55 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:14:55.347706 | orchestrator | 2026-03-29 01:14:55 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:14:58.392165 | orchestrator | 2026-03-29 01:14:58 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in 
state STARTED 2026-03-29 01:14:58.393848 | orchestrator | 2026-03-29 01:14:58 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:14:58.393918 | orchestrator | 2026-03-29 01:14:58 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:15:01.442205 | orchestrator | 2026-03-29 01:15:01 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:15:01.443695 | orchestrator | 2026-03-29 01:15:01 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:15:01.443741 | orchestrator | 2026-03-29 01:15:01 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:15:04.486190 | orchestrator | 2026-03-29 01:15:04 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:15:04.487734 | orchestrator | 2026-03-29 01:15:04 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:15:04.487829 | orchestrator | 2026-03-29 01:15:04 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:15:07.532804 | orchestrator | 2026-03-29 01:15:07 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:15:07.535806 | orchestrator | 2026-03-29 01:15:07 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:15:07.535873 | orchestrator | 2026-03-29 01:15:07 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:15:10.575402 | orchestrator | 2026-03-29 01:15:10 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:15:10.577143 | orchestrator | 2026-03-29 01:15:10 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED 2026-03-29 01:15:10.577217 | orchestrator | 2026-03-29 01:15:10 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:15:13.620373 | orchestrator | 2026-03-29 01:15:13 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED 2026-03-29 01:15:13.623926 | orchestrator | 2026-03-29 01:15:13 | 
INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:15:13.624232 | orchestrator | 2026-03-29 01:15:13 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:15:16.670418 | orchestrator | 2026-03-29 01:15:16 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED
2026-03-29 01:15:16.671890 | orchestrator | 2026-03-29 01:15:16 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:15:16.671922 | orchestrator | 2026-03-29 01:15:16 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:15:19.719619 | orchestrator | 2026-03-29 01:15:19 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED
2026-03-29 01:15:19.721459 | orchestrator | 2026-03-29 01:15:19 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state STARTED
2026-03-29 01:15:19.721542 | orchestrator | 2026-03-29 01:15:19 | INFO  | Wait 1 second(s) until the next check
2026-03-29 01:15:22.759854 | orchestrator | 2026-03-29 01:15:22 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state STARTED
2026-03-29 01:17:22.877026 | orchestrator | 2026-03-29 01:17:22 | INFO  | Task 8972ddd9-e73f-4e62-8f6a-8b17a36a4560 is in state SUCCESS
2026-03-29 01:17:22.881041 | orchestrator |
2026-03-29 01:17:22.881148 | orchestrator |
2026-03-29 01:17:22.881159 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-29 01:17:22.881167 | orchestrator |
2026-03-29 01:17:22.881175 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-29 01:17:22.881182 | orchestrator | Sunday 29 March 2026 01:07:07 +0000 (0:00:00.276) 0:00:00.276 **********
2026-03-29 01:17:22.881220 | orchestrator | changed: [testbed-manager]
2026-03-29 01:17:22.881247 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.881254 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:17:22.881261 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:17:22.881268 | orchestrator | changed: [testbed-node-3]
2026-03-29 01:17:22.881275 | orchestrator | changed: [testbed-node-4]
2026-03-29 01:17:22.881281 | orchestrator | changed: [testbed-node-5]
2026-03-29 01:17:22.881288 | orchestrator |
2026-03-29 01:17:22.881296 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-29 01:17:22.881303 | orchestrator | Sunday 29 March 2026 01:07:08 +0000 (0:00:01.041) 0:00:01.318 **********
2026-03-29 01:17:22.881310 | orchestrator | changed: [testbed-manager]
2026-03-29 01:17:22.881318 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.881325 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:17:22.881332 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:17:22.881340 | orchestrator | changed: [testbed-node-3]
2026-03-29 01:17:22.881347 | orchestrator | changed: [testbed-node-4]
2026-03-29 01:17:22.881451 | orchestrator | changed: [testbed-node-5]
2026-03-29 01:17:22.881460 | orchestrator |
2026-03-29 01:17:22.881498 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-29 01:17:22.881506 | orchestrator | Sunday 29 March 2026 01:07:09 +0000 (0:00:00.854) 0:00:02.172 **********
2026-03-29 01:17:22.881513 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-29 01:17:22.881520 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-29 01:17:22.881527 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-29 01:17:22.881534 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-29 01:17:22.881602 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-29 01:17:22.881613 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-29 01:17:22.881622 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-29 01:17:22.881631 | orchestrator |
2026-03-29 01:17:22.881641 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-29 01:17:22.881652 | orchestrator |
2026-03-29 01:17:22.881660 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-29 01:17:22.881668 | orchestrator | Sunday 29 March 2026 01:07:10 +0000 (0:00:00.798) 0:00:02.971 **********
2026-03-29 01:17:22.881677 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:17:22.881686 | orchestrator |
2026-03-29 01:17:22.881719 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-29 01:17:22.881727 | orchestrator | Sunday 29 March 2026 01:07:11 +0000 (0:00:01.073) 0:00:04.045 **********
2026-03-29 01:17:22.881736 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-29 01:17:22.881745 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-29 01:17:22.881751 | orchestrator |
2026-03-29 01:17:22.881758 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-29 01:17:22.881766 | orchestrator | Sunday 29 March 2026 01:07:14 +0000 (0:00:03.467) 0:00:07.513 **********
2026-03-29 01:17:22.881774 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-29 01:17:22.881781 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-29 01:17:22.881787 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.881793 | orchestrator |
2026-03-29 01:17:22.881800 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-29 01:17:22.881807 | orchestrator | Sunday 29 March 2026 01:07:19 +0000 (0:00:04.378) 0:00:11.891 **********
2026-03-29 01:17:22.881815 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.881822 | orchestrator |
2026-03-29 01:17:22.881829 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-29 01:17:22.881837 | orchestrator | Sunday 29 March 2026 01:07:20 +0000 (0:00:01.479) 0:00:13.375 **********
2026-03-29 01:17:22.881844 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.881851 | orchestrator |
2026-03-29 01:17:22.881859 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-29 01:17:22.881866 | orchestrator | Sunday 29 March 2026 01:07:23 +0000 (0:00:02.297) 0:00:15.673 **********
2026-03-29 01:17:22.881874 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.881881 | orchestrator |
2026-03-29 01:17:22.881889 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-29 01:17:22.881896 | orchestrator | Sunday 29 March 2026 01:07:26 +0000 (0:00:03.532) 0:00:19.205 **********
2026-03-29 01:17:22.881902 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:22.881910 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.881916 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.881924 | orchestrator |
2026-03-29 01:17:22.881930 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-29 01:17:22.881937 | orchestrator | Sunday 29 March 2026 01:07:26 +0000 (0:00:00.323) 0:00:19.529 **********
2026-03-29 01:17:22.881944 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:22.881951 | orchestrator |
2026-03-29 01:17:22.881957 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-29 01:17:22.881964 | orchestrator | Sunday 29 March 2026 01:07:57 +0000 (0:00:30.529) 0:00:50.058 **********
2026-03-29 01:17:22.881971 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.881978 | orchestrator |
2026-03-29 01:17:22.881984 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-29 01:17:22.881991 | orchestrator | Sunday 29 March 2026 01:08:12 +0000 (0:00:15.383) 0:01:05.442 **********
2026-03-29 01:17:22.881998 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:22.882004 | orchestrator |
2026-03-29 01:17:22.882011 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-29 01:17:22.882069 | orchestrator | Sunday 29 March 2026 01:08:25 +0000 (0:00:12.974) 0:01:18.417 **********
2026-03-29 01:17:22.882095 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:22.882102 | orchestrator |
2026-03-29 01:17:22.882109 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-29 01:17:22.882115 | orchestrator | Sunday 29 March 2026 01:08:26 +0000 (0:00:01.151) 0:01:19.568 **********
2026-03-29 01:17:22.882122 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:22.882128 | orchestrator |
2026-03-29 01:17:22.882135 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-29 01:17:22.882149 | orchestrator | Sunday 29 March 2026 01:08:27 +0000 (0:00:00.458) 0:01:20.027 **********
2026-03-29 01:17:22.882157 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:17:22.882165 | orchestrator |
2026-03-29 01:17:22.882171 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-29 01:17:22.882177 | orchestrator | Sunday 29 March 2026 01:08:27 +0000 (0:00:00.563) 0:01:20.591 **********
2026-03-29 01:17:22.882183 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:22.882189 | orchestrator |
2026-03-29 01:17:22.882196 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-29 01:17:22.882203 | orchestrator | Sunday 29 March 2026 01:08:45 +0000 (0:00:18.047) 0:01:38.638 **********
2026-03-29 01:17:22.882209 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:22.882234 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.882241 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.882249 | orchestrator |
2026-03-29 01:17:22.882256 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-29 01:17:22.882264 | orchestrator |
2026-03-29 01:17:22.882271 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-29 01:17:22.882278 | orchestrator | Sunday 29 March 2026 01:08:46 +0000 (0:00:00.460) 0:01:39.099 **********
2026-03-29 01:17:22.882285 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:17:22.882301 | orchestrator |
2026-03-29 01:17:22.882307 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-29 01:17:22.882315 | orchestrator | Sunday 29 March 2026 01:08:47 +0000 (0:00:00.548) 0:01:39.647 **********
2026-03-29 01:17:22.882323 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.882330 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.882338 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.882345 | orchestrator |
2026-03-29 01:17:22.882353 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-29 01:17:22.882360 | orchestrator | Sunday 29 March 2026 01:08:49 +0000 (0:00:02.233) 0:01:41.881 **********
2026-03-29 01:17:22.882368 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.882505 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.882513 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.882520 | orchestrator |
2026-03-29 01:17:22.882526 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-29 01:17:22.882533 | orchestrator | Sunday 29 March 2026 01:08:51 +0000 (0:00:02.224) 0:01:44.105 **********
2026-03-29 01:17:22.882540 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:22.882548 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.882554 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.882561 | orchestrator |
2026-03-29 01:17:22.882607 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-29 01:17:22.882615 | orchestrator | Sunday 29 March 2026 01:08:51 +0000 (0:00:00.309) 0:01:44.415 **********
2026-03-29 01:17:22.882622 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-29 01:17:22.882629 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.882636 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-29 01:17:22.882644 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.882659 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-29 01:17:22.882667 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-29 01:17:22.882674 | orchestrator |
2026-03-29 01:17:22.882681 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-29 01:17:22.882688 | orchestrator | Sunday 29 March 2026 01:09:00 +0000 (0:00:08.291) 0:01:52.706 **********
2026-03-29 01:17:22.882717 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:22.882724 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.882731 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.882738 | orchestrator |
2026-03-29 01:17:22.882744 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-29 01:17:22.882751 | orchestrator | Sunday 29 March 2026 01:09:00 +0000 (0:00:00.303) 0:01:53.010 **********
2026-03-29 01:17:22.882758 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-29 01:17:22.882765 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:22.882772 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-29 01:17:22.882779 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.882786 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-29 01:17:22.882793 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.882800 | orchestrator |
2026-03-29 01:17:22.882807 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-29 01:17:22.882813 | orchestrator | Sunday 29 March 2026 01:09:00 +0000 (0:00:00.571) 0:01:53.582 **********
2026-03-29 01:17:22.882821 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.882827 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.882834 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.882841 | orchestrator |
2026-03-29 01:17:22.882848 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-29 01:17:22.882854 | orchestrator | Sunday 29 March 2026 01:09:01 +0000 (0:00:00.670) 0:01:54.252 **********
2026-03-29 01:17:22.882861 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.882868 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.882875 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.882899 | orchestrator |
2026-03-29 01:17:22.882906 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-29 01:17:22.882913 | orchestrator | Sunday 29 March 2026 01:09:02 +0000 (0:00:00.912) 0:01:55.164 **********
2026-03-29 01:17:22.882920 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.882927 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.882943 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.882950 | orchestrator |
2026-03-29 01:17:22.882956 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-29 01:17:22.882963 | orchestrator | Sunday 29 March 2026 01:09:04 +0000 (0:00:01.958) 0:01:57.123 **********
2026-03-29 01:17:22.882970 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.882976 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.882989 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:22.882996 | orchestrator |
2026-03-29 01:17:22.883003 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-29 01:17:22.883009 | orchestrator | Sunday 29 March 2026 01:09:23 +0000 (0:00:19.216) 0:02:16.340 **********
2026-03-29 01:17:22.883017 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.883023 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.883030 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:22.883036 | orchestrator |
2026-03-29 01:17:22.883044 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-29 01:17:22.883050 | orchestrator | Sunday 29 March 2026 01:09:37 +0000 (0:00:13.951) 0:02:30.291 **********
2026-03-29 01:17:22.883057 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:22.883062 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.883068 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.883074 | orchestrator |
2026-03-29 01:17:22.883080 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-29 01:17:22.883094 | orchestrator | Sunday 29 March 2026 01:09:38 +0000 (0:00:00.843) 0:02:31.135 **********
2026-03-29 01:17:22.883099 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.883105 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.883110 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:22.883115 | orchestrator |
2026-03-29 01:17:22.883121 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-29 01:17:22.883127 | orchestrator | Sunday 29 March 2026 01:09:52 +0000 (0:00:13.505) 0:02:44.641 **********
2026-03-29 01:17:22.883132 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:22.883138 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.883144 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.883149 | orchestrator |
2026-03-29 01:17:22.883155 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-29 01:17:22.883160 | orchestrator | Sunday 29 March 2026 01:09:53 +0000 (0:00:01.087) 0:02:45.728 **********
2026-03-29 01:17:22.883166 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:22.883171 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.883176 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.883202 | orchestrator |
2026-03-29 01:17:22.883208 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-29 01:17:22.883215 | orchestrator |
2026-03-29 01:17:22.883222 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-29 01:17:22.883229 | orchestrator | Sunday 29 March 2026 01:09:53 +0000 (0:00:00.507) 0:02:46.236 **********
2026-03-29 01:17:22.883236 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:17:22.883244 | orchestrator |
2026-03-29 01:17:22.883251 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-29 01:17:22.883257 | orchestrator | Sunday 29 March 2026 01:09:54 +0000 (0:00:00.532) 0:02:46.768 **********
2026-03-29 01:17:22.883263 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-29 01:17:22.883270 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-29 01:17:22.883276 | orchestrator |
2026-03-29 01:17:22.883282 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-29 01:17:22.883289 | orchestrator | Sunday 29 March 2026 01:09:58 +0000 (0:00:04.117) 0:02:50.886 **********
2026-03-29 01:17:22.883296 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-29 01:17:22.883305 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-29 01:17:22.883312 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-29 01:17:22.883320 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-29 01:17:22.883327 | orchestrator |
2026-03-29 01:17:22.883333 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-29 01:17:22.883340 | orchestrator | Sunday 29 March 2026 01:10:04 +0000 (0:00:06.146) 0:02:57.033 **********
2026-03-29 01:17:22.883347 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-29 01:17:22.883354 | orchestrator |
2026-03-29 01:17:22.883361 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-29 01:17:22.883368 | orchestrator | Sunday 29 March 2026 01:10:07 +0000 (0:00:03.066) 0:03:00.099 **********
2026-03-29 01:17:22.883375 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-29 01:17:22.883381 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-29 01:17:22.883388 | orchestrator |
2026-03-29 01:17:22.883460 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-29 01:17:22.883468 | orchestrator | Sunday 29 March 2026 01:10:11 +0000 (0:00:04.203) 0:03:04.302 **********
2026-03-29 01:17:22.883491 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 01:17:22.883498 | orchestrator |
2026-03-29 01:17:22.883505 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-29 01:17:22.883512 | orchestrator | Sunday 29 March 2026 01:10:14 +0000 (0:00:02.936) 0:03:07.239 **********
2026-03-29 01:17:22.883519 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-29 01:17:22.883525 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-29 01:17:22.883532 | orchestrator |
2026-03-29 01:17:22.883539 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-29 01:17:22.883554 | orchestrator | Sunday 29 March 2026 01:10:22 +0000 (0:00:08.018) 0:03:15.258 **********
2026-03-29 01:17:22.883573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-29 01:17:22.883585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-29 01:17:22.883594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-29 01:17:22.883623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:17:22.883634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:17:22.883641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-29 01:17:22.883649 | orchestrator |
2026-03-29 01:17:22.883656 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-03-29 01:17:22.883662 | orchestrator | Sunday 29 March 2026 01:10:23 +0000 (0:00:01.245) 0:03:16.503 **********
2026-03-29 01:17:22.883669 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:22.883675 | orchestrator |
2026-03-29 01:17:22.883682 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-29 01:17:22.883688 | orchestrator | Sunday 29 March 2026 01:10:23 +0000 (0:00:00.131) 0:03:16.635 **********
2026-03-29 01:17:22.883739 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:22.883747 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:22.883754 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:22.883760 | orchestrator |
2026-03-29 01:17:22.883767 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-29 01:17:22.883773 | orchestrator | Sunday 29 March 2026 01:10:24 +0000 (0:00:00.488) 0:03:17.124 **********
2026-03-29 01:17:22.883780 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-29 01:17:22.883787 | orchestrator |
2026-03-29 01:17:22.883794 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-29 01:17:22.883801 | orchestrator | Sunday 29 March 2026 01:10:25 +0000 (0:00:00.728)
0:03:17.852 ********** 2026-03-29 01:17:22.883808 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.883815 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.883822 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.883829 | orchestrator | 2026-03-29 01:17:22.883836 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-29 01:17:22.883850 | orchestrator | Sunday 29 March 2026 01:10:25 +0000 (0:00:00.333) 0:03:18.185 ********** 2026-03-29 01:17:22.883857 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:17:22.883864 | orchestrator | 2026-03-29 01:17:22.883871 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-29 01:17:22.883877 | orchestrator | Sunday 29 March 2026 01:10:26 +0000 (0:00:00.534) 0:03:18.720 ********** 2026-03-29 01:17:22.883890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.883903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.883912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.883925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.883933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.883947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.883954 | orchestrator | 2026-03-29 01:17:22.883966 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-29 01:17:22.883973 | orchestrator | Sunday 29 March 2026 01:10:28 +0000 (0:00:02.688) 0:03:21.408 ********** 2026-03-29 01:17:22.883980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:17:22.883988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.884002 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.884009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:17:22.884022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.884030 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.884040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:17:22.884049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.884060 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.884067 | orchestrator | 2026-03-29 01:17:22.884073 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-29 01:17:22.884079 | orchestrator | Sunday 29 March 2026 01:10:29 +0000 (0:00:00.585) 0:03:21.993 ********** 2026-03-29 01:17:22.884085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:17:22.884092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.884100 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.884118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:17:22.884125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.884138 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.884144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:17:22.884151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.884158 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.884165 | orchestrator | 2026-03-29 01:17:22.884173 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-29 01:17:22.884179 | orchestrator | Sunday 29 March 2026 01:10:30 +0000 (0:00:00.798) 0:03:22.792 ********** 2026-03-29 01:17:22.884196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.884205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.884218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.884231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.884246 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.884254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.884261 | orchestrator | 2026-03-29 01:17:22.884268 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-29 01:17:22.884281 | orchestrator | Sunday 29 March 2026 01:10:32 +0000 (0:00:02.486) 0:03:25.279 ********** 2026-03-29 01:17:22.884288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.884296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.884313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.884321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-29 01:17:22.884334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.884341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.884348 | orchestrator | 2026-03-29 01:17:22.884354 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-29 01:17:22.884361 | orchestrator | Sunday 29 March 2026 01:10:37 +0000 (0:00:05.265) 0:03:30.544 ********** 2026-03-29 01:17:22.884372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:17:22.884385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.884393 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.884400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:17:22.884414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.884422 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.884429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-29 01:17:22.884448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.884456 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.884464 | orchestrator | 2026-03-29 01:17:22.884471 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-29 01:17:22.884478 | orchestrator | Sunday 29 March 2026 01:10:38 +0000 (0:00:00.723) 0:03:31.268 ********** 2026-03-29 01:17:22.884490 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:22.884497 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:22.884504 
| orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:22.884511 | orchestrator | 2026-03-29 01:17:22.884519 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-29 01:17:22.884525 | orchestrator | Sunday 29 March 2026 01:10:40 +0000 (0:00:01.469) 0:03:32.737 ********** 2026-03-29 01:17:22.884532 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.884538 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.884545 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.884551 | orchestrator | 2026-03-29 01:17:22.884557 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-29 01:17:22.884564 | orchestrator | Sunday 29 March 2026 01:10:40 +0000 (0:00:00.363) 0:03:33.100 ********** 2026-03-29 01:17:22.884572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.884580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.884600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:22.884614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.884622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.884629 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.884636 | orchestrator | 2026-03-29 01:17:22.884644 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-29 01:17:22.884651 | orchestrator | Sunday 29 March 2026 01:10:42 +0000 (0:00:02.289) 0:03:35.389 ********** 2026-03-29 01:17:22.884657 | orchestrator | 2026-03-29 01:17:22.884664 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-29 01:17:22.884671 | orchestrator | Sunday 29 March 2026 01:10:42 +0000 (0:00:00.145) 0:03:35.535 ********** 2026-03-29 01:17:22.884678 | orchestrator | 2026-03-29 01:17:22.884685 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-29 01:17:22.884741 | orchestrator | Sunday 29 March 2026 01:10:43 +0000 (0:00:00.132) 0:03:35.668 ********** 2026-03-29 01:17:22.884751 | orchestrator | 2026-03-29 01:17:22.884758 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-29 01:17:22.884764 | orchestrator | Sunday 29 March 2026 01:10:43 +0000 (0:00:00.133) 0:03:35.801 ********** 2026-03-29 01:17:22.884771 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:22.884778 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:22.884785 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:22.884791 | orchestrator | 2026-03-29 
01:17:22.884799 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-29 01:17:22.884805 | orchestrator | Sunday 29 March 2026 01:11:03 +0000 (0:00:20.115) 0:03:55.917 ********** 2026-03-29 01:17:22.884811 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:22.884818 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:22.884830 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:22.884836 | orchestrator | 2026-03-29 01:17:22.884842 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-29 01:17:22.884848 | orchestrator | 2026-03-29 01:17:22.884855 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 01:17:22.884861 | orchestrator | Sunday 29 March 2026 01:11:08 +0000 (0:00:05.587) 0:04:01.505 ********** 2026-03-29 01:17:22.884867 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:17:22.884875 | orchestrator | 2026-03-29 01:17:22.884887 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 01:17:22.884894 | orchestrator | Sunday 29 March 2026 01:11:10 +0000 (0:00:01.228) 0:04:02.734 ********** 2026-03-29 01:17:22.884900 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.884907 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.884913 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.884919 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.884926 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.884939 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.884946 | orchestrator | 2026-03-29 01:17:22.884954 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-29 01:17:22.884960 
| orchestrator | Sunday 29 March 2026 01:11:10 +0000 (0:00:00.637) 0:04:03.371 ********** 2026-03-29 01:17:22.884967 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.884974 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.884981 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.884989 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:17:22.884995 | orchestrator | 2026-03-29 01:17:22.885001 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-29 01:17:22.885007 | orchestrator | Sunday 29 March 2026 01:11:11 +0000 (0:00:01.093) 0:04:04.465 ********** 2026-03-29 01:17:22.885014 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-29 01:17:22.885021 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-29 01:17:22.885028 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-29 01:17:22.885035 | orchestrator | 2026-03-29 01:17:22.885042 | orchestrator | TASK [module-load : Persist modules via modules-load.d] 2026-03-29 01:17:22 | INFO  | Wait 1 second(s) until the next check 2026-03-29 01:17:22.885050 | orchestrator | ************************ 2026-03-29 01:17:22.885056 | orchestrator | Sunday 29 March 2026 01:11:12 +0000 (0:00:00.689) 0:04:05.154 ********** 2026-03-29 01:17:22.885062 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-29 01:17:22.885068 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-29 01:17:22.885073 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-29 01:17:22.885079 | orchestrator | 2026-03-29 01:17:22.885085 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-29 01:17:22.885091 | orchestrator | Sunday 29 March 2026 01:11:13 +0000 (0:00:01.358) 0:04:06.513 ********** 2026-03-29 01:17:22.885098 | orchestrator | skipping: 
[testbed-node-3] => (item=br_netfilter)  2026-03-29 01:17:22.885105 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.885112 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-29 01:17:22.885118 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.885125 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-29 01:17:22.885132 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.885139 | orchestrator | 2026-03-29 01:17:22.885146 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-29 01:17:22.885153 | orchestrator | Sunday 29 March 2026 01:11:14 +0000 (0:00:00.649) 0:04:07.162 ********** 2026-03-29 01:17:22.885160 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 01:17:22.885176 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 01:17:22.885183 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.885190 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 01:17:22.885196 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 01:17:22.885203 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.885210 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-29 01:17:22.885216 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-29 01:17:22.885223 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-29 01:17:22.885230 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-29 01:17:22.885236 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.885243 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 
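The `module-load` and sysctl tasks above boil down to a one-line `modules-load.d` drop-in (so `br_netfilter` is loaded at boot) plus two `net.bridge.bridge-nf-call-*` keys set to 1 so bridged traffic traverses ip(6)tables. An illustrative sketch of that end state (not kolla-ansible's actual implementation; a temp directory stands in for `/etc` so it runs unprivileged):

```python
# Sketch of what "Persist modules via modules-load.d" leaves behind on the
# compute nodes, using a scratch dir instead of /etc/modules-load.d.
import pathlib
import tempfile

etc = pathlib.Path(tempfile.mkdtemp())
conf = etc / "modules-load.d" / "br_netfilter.conf"
conf.parent.mkdir(parents=True)
conf.write_text("br_netfilter\n")  # real path: /etc/modules-load.d/br_netfilter.conf

# Keys the "Enable bridge-nf-call sysctl variables" task sets on a real node
# (applied there with sysctl, root required):
sysctls = {
    "net.bridge.bridge-nf-call-iptables": 1,
    "net.bridge.bridge-nf-call-ip6tables": 1,
}
print(conf.read_text().strip(), sysctls)
```

On the actual nodes the equivalent privileged commands would be `modprobe br_netfilter` followed by `sysctl -w` for each key.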
2026-03-29 01:17:22.885250 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-29 01:17:22.885256 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-29 01:17:22.885263 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-29 01:17:22.885270 | orchestrator | 2026-03-29 01:17:22.885277 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-29 01:17:22.885283 | orchestrator | Sunday 29 March 2026 01:11:15 +0000 (0:00:01.266) 0:04:08.429 ********** 2026-03-29 01:17:22.885290 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.885297 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.885304 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.885310 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:17:22.885317 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:17:22.885323 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:17:22.885330 | orchestrator | 2026-03-29 01:17:22.885336 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-29 01:17:22.885343 | orchestrator | Sunday 29 March 2026 01:11:16 +0000 (0:00:01.192) 0:04:09.621 ********** 2026-03-29 01:17:22.885350 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.885357 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.885363 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.885369 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:17:22.885376 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:17:22.885383 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:17:22.885390 | orchestrator | 2026-03-29 01:17:22.885397 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-29 01:17:22.885411 | orchestrator | Sunday 29 March 2026 
01:11:19 +0000 (0:00:02.066) 0:04:11.688 ********** 2026-03-29 01:17:22.885426 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885450 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885466 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885506 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885514 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885539 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.885991 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886085 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886094 | orchestrator | 2026-03-29 01:17:22.886102 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 01:17:22.886110 | orchestrator | Sunday 29 March 2026 01:11:21 +0000 (0:00:01.970) 0:04:13.658 ********** 2026-03-29 01:17:22.886116 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:17:22.886125 | orchestrator | 2026-03-29 01:17:22.886132 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-29 01:17:22.886140 | orchestrator | Sunday 29 March 2026 01:11:22 +0000 (0:00:01.220) 0:04:14.878 ********** 2026-03-29 01:17:22.886148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886188 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886225 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886271 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2026-03-29 01:17:22.886301 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.886309 | orchestrator | 2026-03-29 01:17:22.886315 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-29 01:17:22.886322 | orchestrator | Sunday 29 March 2026 01:11:25 +0000 (0:00:03.343) 0:04:18.222 ********** 2026-03-29 01:17:22.886334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:17:22.886341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:17:22.886348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:17:22.886371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:17:22.886384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886392 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.886400 | orchestrator | 
skipping: [testbed-node-4] 2026-03-29 01:17:22.886406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:17:22.886412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:17:22.886418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886432 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.886441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:17:22.886447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886454 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.886465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:17:22.886473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886480 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.886487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:17:22.886494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886511 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.886518 | orchestrator | 2026-03-29 01:17:22.886525 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-29 01:17:22.886532 | orchestrator | Sunday 29 March 2026 01:11:27 +0000 (0:00:01.760) 0:04:19.982 ********** 2026-03-29 01:17:22.886545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:17:22.886557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:17:22.886565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886573 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.886581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:17:22.886589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:17:22.886602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886610 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.886623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:17:22.886635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:17:22.886643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886651 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.886659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:17:22.886672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886680 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.886691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:17:22.886725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886732 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.886746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:17:22.886754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.886762 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.886769 | orchestrator | 2026-03-29 01:17:22.886776 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 01:17:22.886784 | orchestrator | Sunday 29 March 2026 01:11:29 +0000 (0:00:02.125) 0:04:22.108 ********** 2026-03-29 01:17:22.886791 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.886799 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.886805 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.886814 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-29 01:17:22.886827 | orchestrator | 2026-03-29 01:17:22.886839 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-29 01:17:22.886848 | orchestrator | Sunday 29 March 2026 01:11:30 +0000 (0:00:01.026) 0:04:23.134 ********** 2026-03-29 01:17:22.886855 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 01:17:22.886863 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 01:17:22.886872 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 01:17:22.886880 | orchestrator | 2026-03-29 01:17:22.886887 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-29 01:17:22.886895 | orchestrator | Sunday 29 March 2026 01:11:31 +0000 (0:00:00.990) 0:04:24.125 ********** 2026-03-29 01:17:22.886902 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 01:17:22.886909 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-29 01:17:22.886916 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-29 01:17:22.886923 | orchestrator | 
2026-03-29 01:17:22.886930 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-29 01:17:22.886937 | orchestrator | Sunday 29 March 2026 01:11:32 +0000 (0:00:00.943) 0:04:25.068 ********** 2026-03-29 01:17:22.886946 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:17:22.886955 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:17:22.886963 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:17:22.886970 | orchestrator | 2026-03-29 01:17:22.886977 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-29 01:17:22.886984 | orchestrator | Sunday 29 March 2026 01:11:32 +0000 (0:00:00.483) 0:04:25.552 ********** 2026-03-29 01:17:22.886991 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:17:22.886998 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:17:22.887005 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:17:22.887012 | orchestrator | 2026-03-29 01:17:22.887019 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-29 01:17:22.887026 | orchestrator | Sunday 29 March 2026 01:11:33 +0000 (0:00:00.762) 0:04:26.315 ********** 2026-03-29 01:17:22.887033 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-29 01:17:22.887040 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-29 01:17:22.887047 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-29 01:17:22.887054 | orchestrator | 2026-03-29 01:17:22.887060 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-29 01:17:22.887066 | orchestrator | Sunday 29 March 2026 01:11:35 +0000 (0:00:01.365) 0:04:27.680 ********** 2026-03-29 01:17:22.887072 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-29 01:17:22.887078 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-29 01:17:22.887089 | orchestrator | 
changed: [testbed-node-5] => (item=nova-compute) 2026-03-29 01:17:22.887096 | orchestrator | 2026-03-29 01:17:22.887103 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-29 01:17:22.887110 | orchestrator | Sunday 29 March 2026 01:11:36 +0000 (0:00:01.195) 0:04:28.875 ********** 2026-03-29 01:17:22.887116 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-29 01:17:22.887123 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-29 01:17:22.887130 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-29 01:17:22.887137 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-29 01:17:22.887144 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-29 01:17:22.887151 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-29 01:17:22.887157 | orchestrator | 2026-03-29 01:17:22.887164 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-29 01:17:22.887171 | orchestrator | Sunday 29 March 2026 01:11:39 +0000 (0:00:03.584) 0:04:32.460 ********** 2026-03-29 01:17:22.887178 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.887184 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.887191 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.887204 | orchestrator | 2026-03-29 01:17:22.887214 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-29 01:17:22.887221 | orchestrator | Sunday 29 March 2026 01:11:40 +0000 (0:00:00.575) 0:04:33.036 ********** 2026-03-29 01:17:22.887228 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.887235 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.887242 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.887248 | orchestrator | 2026-03-29 01:17:22.887255 | orchestrator | TASK [nova-cell : 
Ensuring libvirt secrets directory exists] ******************* 2026-03-29 01:17:22.887262 | orchestrator | Sunday 29 March 2026 01:11:40 +0000 (0:00:00.315) 0:04:33.351 ********** 2026-03-29 01:17:22.887269 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:17:22.887275 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:17:22.887282 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:17:22.887289 | orchestrator | 2026-03-29 01:17:22.887295 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-29 01:17:22.887302 | orchestrator | Sunday 29 March 2026 01:11:42 +0000 (0:00:01.405) 0:04:34.757 ********** 2026-03-29 01:17:22.887310 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-29 01:17:22.887316 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-29 01:17:22.887322 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-29 01:17:22.887328 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-29 01:17:22.887335 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-29 01:17:22.887342 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-29 01:17:22.887348 | orchestrator | 2026-03-29 01:17:22.887355 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-29 01:17:22.887362 | orchestrator | Sunday 29 March 2026 01:11:45 +0000 
(0:00:03.550) 0:04:38.308 ********** 2026-03-29 01:17:22.887369 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 01:17:22.887376 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 01:17:22.887383 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 01:17:22.887389 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-29 01:17:22.887396 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:17:22.887401 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-29 01:17:22.887408 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:17:22.887414 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-29 01:17:22.887421 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:17:22.887428 | orchestrator | 2026-03-29 01:17:22.887435 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-29 01:17:22.887441 | orchestrator | Sunday 29 March 2026 01:11:49 +0000 (0:00:03.657) 0:04:41.965 ********** 2026-03-29 01:17:22.887448 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.887455 | orchestrator | 2026-03-29 01:17:22.887462 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-29 01:17:22.887468 | orchestrator | Sunday 29 March 2026 01:11:49 +0000 (0:00:00.142) 0:04:42.108 ********** 2026-03-29 01:17:22.887475 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.887482 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.887489 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.887496 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.887503 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.887509 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.887522 | orchestrator | 2026-03-29 01:17:22.887529 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 
2026-03-29 01:17:22.887535 | orchestrator | Sunday 29 March 2026 01:11:50 +0000 (0:00:00.573) 0:04:42.681 ********** 2026-03-29 01:17:22.887542 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-29 01:17:22.887549 | orchestrator | 2026-03-29 01:17:22.887556 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-29 01:17:22.887562 | orchestrator | Sunday 29 March 2026 01:11:50 +0000 (0:00:00.718) 0:04:43.399 ********** 2026-03-29 01:17:22.887569 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.887576 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.887582 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.887589 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.887601 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.887608 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.887614 | orchestrator | 2026-03-29 01:17:22.887621 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-29 01:17:22.887628 | orchestrator | Sunday 29 March 2026 01:11:51 +0000 (0:00:00.767) 0:04:44.167 ********** 2026-03-29 01:17:22.887642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887664 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887749 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887759 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887767 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887811 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887825 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887831 | orchestrator | 2026-03-29 01:17:22.887838 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-29 01:17:22.887850 | orchestrator | Sunday 29 March 2026 01:11:55 +0000 (0:00:03.492) 0:04:47.659 ********** 2026-03-29 01:17:22.887858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:17:22.887871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 
 2026-03-29 01:17:22.887882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:17:22.887889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:17:22.887897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:17:22.887911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:17:22.887918 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887929 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.887992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.888013 | orchestrator | 2026-03-29 01:17:22.888023 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-29 01:17:22.888030 | orchestrator | Sunday 29 March 2026 01:12:01 +0000 (0:00:06.413) 0:04:54.073 ********** 2026-03-29 01:17:22.888037 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.888044 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.888051 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.888057 | 
orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.888063 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.888069 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.888074 | orchestrator | 2026-03-29 01:17:22.888080 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-29 01:17:22.888086 | orchestrator | Sunday 29 March 2026 01:12:02 +0000 (0:00:01.329) 0:04:55.402 ********** 2026-03-29 01:17:22.888093 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-29 01:17:22.888100 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-29 01:17:22.888106 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-29 01:17:22.888113 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-29 01:17:22.888126 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.888132 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-29 01:17:22.888139 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-29 01:17:22.888145 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.888151 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-29 01:17:22.888157 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-29 01:17:22.888164 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.888170 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-29 01:17:22.888177 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-29 
01:17:22.888184 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-29 01:17:22.888190 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-29 01:17:22.888196 | orchestrator | 2026-03-29 01:17:22.888203 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-29 01:17:22.888209 | orchestrator | Sunday 29 March 2026 01:12:06 +0000 (0:00:03.698) 0:04:59.101 ********** 2026-03-29 01:17:22.888216 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.888222 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.888228 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.888235 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.888241 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.888248 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.888255 | orchestrator | 2026-03-29 01:17:22.888262 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-29 01:17:22.888268 | orchestrator | Sunday 29 March 2026 01:12:07 +0000 (0:00:00.591) 0:04:59.693 ********** 2026-03-29 01:17:22.888275 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-29 01:17:22.888282 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-29 01:17:22.888289 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-29 01:17:22.888295 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-29 01:17:22.888302 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 
'service': 'nova-compute'}) 2026-03-29 01:17:22.888308 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-29 01:17:22.888314 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-29 01:17:22.888326 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-29 01:17:22.888334 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-29 01:17:22.888341 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-29 01:17:22.888347 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.888354 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-29 01:17:22.888361 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.888368 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-29 01:17:22.888379 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.888387 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-29 01:17:22.888471 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-29 01:17:22.888479 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-29 01:17:22.888486 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-29 01:17:22.888492 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-29 01:17:22.888499 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-29 01:17:22.888506 | orchestrator | 2026-03-29 01:17:22.888513 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-29 01:17:22.888519 | orchestrator | Sunday 29 March 2026 01:12:11 +0000 (0:00:04.717) 0:05:04.411 ********** 2026-03-29 01:17:22.888526 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 01:17:22.888533 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 01:17:22.888539 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-29 01:17:22.888546 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 01:17:22.888553 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 01:17:22.888559 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-29 01:17:22.888566 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-29 01:17:22.888572 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-29 01:17:22.888579 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-29 01:17:22.888586 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 01:17:22.888593 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 01:17:22.888599 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-29 01:17:22.888605 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-29 01:17:22.888612 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.888619 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:17:22.888626 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-29 01:17:22.888631 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.888637 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-29 01:17:22.888643 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.888649 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:17:22.888656 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-29 01:17:22.888664 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 01:17:22.888671 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 01:17:22.888678 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-29 01:17:22.888684 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:17:22.888720 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:17:22.888728 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-29 01:17:22.888734 | orchestrator | 2026-03-29 01:17:22.888741 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-29 01:17:22.888747 | orchestrator | Sunday 29 March 2026 01:12:19 +0000 
(0:00:07.259) 0:05:11.670 ********** 2026-03-29 01:17:22.888753 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.888766 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.888773 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.888780 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.888787 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.888793 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.888800 | orchestrator | 2026-03-29 01:17:22.888807 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-29 01:17:22.888813 | orchestrator | Sunday 29 March 2026 01:12:19 +0000 (0:00:00.626) 0:05:12.296 ********** 2026-03-29 01:17:22.888820 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.888827 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.888834 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.888841 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.888847 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.888855 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.888861 | orchestrator | 2026-03-29 01:17:22.888868 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-29 01:17:22.888875 | orchestrator | Sunday 29 March 2026 01:12:20 +0000 (0:00:00.521) 0:05:12.818 ********** 2026-03-29 01:17:22.888881 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.888888 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.888895 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.888907 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:17:22.888914 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:17:22.888921 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:17:22.888928 | orchestrator | 2026-03-29 01:17:22.888935 | orchestrator | TASK [nova-cell : Copying 
over existing policy file] *************************** 2026-03-29 01:17:22.888942 | orchestrator | Sunday 29 March 2026 01:12:22 +0000 (0:00:01.999) 0:05:14.817 ********** 2026-03-29 01:17:22.888949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:17:22.888957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:17:22.888964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.888978 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.888988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-29 01:17:22.888995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:17:22.889007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.889014 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.889021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  
2026-03-29 01:17:22.889033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-29 01:17:22.889040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:17:22.889051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.889062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.889068 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.889074 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.889080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:17:22.889086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.889098 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.889104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-29 01:17:22.889110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-29 01:17:22.889117 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.889124 | orchestrator | 2026-03-29 01:17:22.889131 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-29 01:17:22.889137 | orchestrator | Sunday 29 March 2026 01:12:23 +0000 (0:00:01.406) 0:05:16.224 ********** 2026-03-29 01:17:22.889144 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-29 01:17:22.889151 | 
orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-29 01:17:22.889158 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.889165 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-29 01:17:22.889172 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-29 01:17:22.889186 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.889194 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-29 01:17:22.889200 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-29 01:17:22.889208 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.889214 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-29 01:17:22.889220 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-29 01:17:22.889226 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.889233 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-29 01:17:22.889239 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-29 01:17:22.889246 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.889253 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-29 01:17:22.889260 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-29 01:17:22.889267 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.889274 | orchestrator | 2026-03-29 01:17:22.889280 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-29 01:17:22.889291 | orchestrator | Sunday 29 March 2026 01:12:24 +0000 (0:00:00.844) 0:05:17.069 ********** 2026-03-29 01:17:22.889299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889373 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2026-03-29 01:17:22.889438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:22.889445 | orchestrator | 2026-03-29 01:17:22.889452 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-29 01:17:22.889459 | orchestrator | Sunday 29 March 2026 01:12:27 +0000 (0:00:02.668) 0:05:19.737 ********** 2026-03-29 01:17:22.889465 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.889472 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.889479 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.889486 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.889493 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.889500 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.889506 | orchestrator | 2026-03-29 01:17:22.889513 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 01:17:22.889519 | orchestrator | Sunday 29 March 2026 01:12:27 +0000 (0:00:00.782) 0:05:20.520 ********** 2026-03-29 01:17:22.889526 | orchestrator | 2026-03-29 01:17:22.889533 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 
2026-03-29 01:17:22.889540 | orchestrator | Sunday 29 March 2026 01:12:28 +0000 (0:00:00.148) 0:05:20.668 ********** 2026-03-29 01:17:22.889546 | orchestrator | 2026-03-29 01:17:22.889553 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 01:17:22.889560 | orchestrator | Sunday 29 March 2026 01:12:28 +0000 (0:00:00.138) 0:05:20.807 ********** 2026-03-29 01:17:22.889567 | orchestrator | 2026-03-29 01:17:22.889574 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 01:17:22.889585 | orchestrator | Sunday 29 March 2026 01:12:28 +0000 (0:00:00.143) 0:05:20.950 ********** 2026-03-29 01:17:22.889592 | orchestrator | 2026-03-29 01:17:22.889599 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 01:17:22.889605 | orchestrator | Sunday 29 March 2026 01:12:28 +0000 (0:00:00.136) 0:05:21.087 ********** 2026-03-29 01:17:22.889612 | orchestrator | 2026-03-29 01:17:22.889619 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-29 01:17:22.889630 | orchestrator | Sunday 29 March 2026 01:12:28 +0000 (0:00:00.143) 0:05:21.230 ********** 2026-03-29 01:17:22.889636 | orchestrator | 2026-03-29 01:17:22.889644 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-29 01:17:22.889651 | orchestrator | Sunday 29 March 2026 01:12:28 +0000 (0:00:00.291) 0:05:21.521 ********** 2026-03-29 01:17:22.889658 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:22.889665 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:22.889673 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:22.889680 | orchestrator | 2026-03-29 01:17:22.889687 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-29 01:17:22.889751 | orchestrator | Sunday 29 March 2026 01:12:40 
+0000 (0:00:11.839) 0:05:33.361 ********** 2026-03-29 01:17:22.889765 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:22.889772 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:22.889779 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:22.889786 | orchestrator | 2026-03-29 01:17:22.889792 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-29 01:17:22.889799 | orchestrator | Sunday 29 March 2026 01:12:54 +0000 (0:00:13.389) 0:05:46.751 ********** 2026-03-29 01:17:22.889806 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:17:22.889813 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:17:22.889819 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:17:22.889826 | orchestrator | 2026-03-29 01:17:22.889833 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-29 01:17:22.889839 | orchestrator | Sunday 29 March 2026 01:13:15 +0000 (0:00:21.285) 0:06:08.036 ********** 2026-03-29 01:17:22.889846 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:17:22.889852 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:17:22.889859 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:17:22.889865 | orchestrator | 2026-03-29 01:17:22.889872 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-29 01:17:22.889879 | orchestrator | Sunday 29 March 2026 01:13:55 +0000 (0:00:39.787) 0:06:47.824 ********** 2026-03-29 01:17:22.889886 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:17:22.889893 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:17:22.889899 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:17:22.889905 | orchestrator | 2026-03-29 01:17:22.889912 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-29 01:17:22.889919 | orchestrator | Sunday 29 March 2026 01:13:55 +0000 
(0:00:00.712) 0:06:48.536 ********** 2026-03-29 01:17:22.889925 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:17:22.889931 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:17:22.889937 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:17:22.889943 | orchestrator | 2026-03-29 01:17:22.889950 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-29 01:17:22.889957 | orchestrator | Sunday 29 March 2026 01:13:56 +0000 (0:00:00.713) 0:06:49.250 ********** 2026-03-29 01:17:22.889963 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:17:22.889970 | orchestrator | changed: [testbed-node-3] 2026-03-29 01:17:22.889976 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:17:22.889983 | orchestrator | 2026-03-29 01:17:22.889990 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-29 01:17:22.889996 | orchestrator | Sunday 29 March 2026 01:14:17 +0000 (0:00:21.287) 0:07:10.538 ********** 2026-03-29 01:17:22.890003 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.890009 | orchestrator | 2026-03-29 01:17:22.890056 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-29 01:17:22.890063 | orchestrator | Sunday 29 March 2026 01:14:18 +0000 (0:00:00.137) 0:07:10.675 ********** 2026-03-29 01:17:22.890069 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.890075 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.890081 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.890096 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.890103 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.890111 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-29 01:17:22.890119 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-29 01:17:22.890126 | orchestrator | 2026-03-29 01:17:22.890132 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-29 01:17:22.890139 | orchestrator | Sunday 29 March 2026 01:14:39 +0000 (0:00:21.156) 0:07:31.832 ********** 2026-03-29 01:17:22.890146 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.890154 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.890162 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.890169 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.890177 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.890185 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.890192 | orchestrator | 2026-03-29 01:17:22.890199 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-29 01:17:22.890207 | orchestrator | Sunday 29 March 2026 01:14:47 +0000 (0:00:08.627) 0:07:40.459 ********** 2026-03-29 01:17:22.890214 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.890221 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.890229 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.890237 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.890244 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.890252 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-03-29 01:17:22.890259 | orchestrator | 2026-03-29 01:17:22.890267 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-29 01:17:22.890274 | orchestrator | Sunday 29 March 2026 01:14:51 +0000 (0:00:03.493) 0:07:43.953 ********** 2026-03-29 01:17:22.890287 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-29 01:17:22.890295 | 
orchestrator | 2026-03-29 01:17:22.890302 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-29 01:17:22.890310 | orchestrator | Sunday 29 March 2026 01:15:05 +0000 (0:00:13.831) 0:07:57.785 ********** 2026-03-29 01:17:22.890317 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-29 01:17:22.890324 | orchestrator | 2026-03-29 01:17:22.890332 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-29 01:17:22.890339 | orchestrator | Sunday 29 March 2026 01:15:06 +0000 (0:00:01.269) 0:07:59.055 ********** 2026-03-29 01:17:22.890346 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.890354 | orchestrator | 2026-03-29 01:17:22.890361 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-29 01:17:22.890369 | orchestrator | Sunday 29 March 2026 01:15:07 +0000 (0:00:01.363) 0:08:00.419 ********** 2026-03-29 01:17:22.890376 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-29 01:17:22.890383 | orchestrator | 2026-03-29 01:17:22.890391 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-29 01:17:22.890403 | orchestrator | Sunday 29 March 2026 01:15:19 +0000 (0:00:12.132) 0:08:12.551 ********** 2026-03-29 01:17:22.890410 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:17:22.890416 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:17:22.890423 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:17:22.890430 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:17:22.890437 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:17:22.890443 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:17:22.890449 | orchestrator | 2026-03-29 01:17:22.890456 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-29 01:17:22.890463 | orchestrator | 2026-03-29 
01:17:22.890470 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-29 01:17:22.890477 | orchestrator | Sunday 29 March 2026 01:15:21 +0000 (0:00:01.809) 0:08:14.360 ********** 2026-03-29 01:17:22.890489 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:22.890496 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:22.890503 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:22.890510 | orchestrator | 2026-03-29 01:17:22.890517 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-29 01:17:22.890524 | orchestrator | 2026-03-29 01:17:22.890530 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-29 01:17:22.890537 | orchestrator | Sunday 29 March 2026 01:15:22 +0000 (0:00:01.102) 0:08:15.462 ********** 2026-03-29 01:17:22.890544 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.890550 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.890556 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.890562 | orchestrator | 2026-03-29 01:17:22.890570 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-29 01:17:22.890577 | orchestrator | 2026-03-29 01:17:22.890583 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-29 01:17:22.890590 | orchestrator | Sunday 29 March 2026 01:15:23 +0000 (0:00:00.537) 0:08:16.000 ********** 2026-03-29 01:17:22.890597 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-29 01:17:22.890604 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-29 01:17:22.890611 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-29 01:17:22.890619 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-29 01:17:22.890626 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-29 01:17:22.890632 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-29 01:17:22.890639 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-29 01:17:22.890646 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-29 01:17:22.890653 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-29 01:17:22.890660 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-29 01:17:22.890666 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-29 01:17:22.890673 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-29 01:17:22.890680 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:17:22.890687 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-29 01:17:22.890708 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-29 01:17:22.890716 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-29 01:17:22.890723 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-29 01:17:22.890729 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-29 01:17:22.890736 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-29 01:17:22.890743 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:17:22.890750 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-29 01:17:22.890756 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-29 01:17:22.890763 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-29 01:17:22.890770 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-29 01:17:22.890776 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-29 
01:17:22.890783 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-29 01:17:22.890789 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:17:22.890796 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-29 01:17:22.890803 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-29 01:17:22.890810 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-29 01:17:22.890817 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-29 01:17:22.890823 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-29 01:17:22.890841 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-29 01:17:22.890848 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.890855 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.890861 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-29 01:17:22.890868 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-29 01:17:22.890874 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-29 01:17:22.890881 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-29 01:17:22.890888 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-29 01:17:22.890895 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-29 01:17:22.890902 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.890908 | orchestrator | 2026-03-29 01:17:22.890914 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-29 01:17:22.890921 | orchestrator | 2026-03-29 01:17:22.890928 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-29 01:17:22.890934 | orchestrator | Sunday 29 March 2026 01:15:24 +0000 (0:00:01.328) 
0:08:17.329 ********** 2026-03-29 01:17:22.890940 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-29 01:17:22.890951 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-29 01:17:22.890957 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.890963 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-29 01:17:22.890969 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-29 01:17:22.890974 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.890980 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-29 01:17:22.890986 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-29 01:17:22.890991 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:22.890998 | orchestrator | 2026-03-29 01:17:22.891004 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-29 01:17:22.891010 | orchestrator | 2026-03-29 01:17:22.891017 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-29 01:17:22.891024 | orchestrator | Sunday 29 March 2026 01:15:25 +0000 (0:00:00.795) 0:08:18.125 ********** 2026-03-29 01:17:22.891031 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.891038 | orchestrator | 2026-03-29 01:17:22.891045 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-29 01:17:22.891051 | orchestrator | 2026-03-29 01:17:22.891058 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-29 01:17:22.891064 | orchestrator | Sunday 29 March 2026 01:15:26 +0000 (0:00:00.674) 0:08:18.800 ********** 2026-03-29 01:17:22.891069 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:22.891075 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:22.891082 | orchestrator | skipping: [testbed-node-2] 
2026-03-29 01:17:22.891088 | orchestrator | 2026-03-29 01:17:22.891094 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:17:22.891100 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:17:22.891108 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-03-29 01:17:22.891115 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-29 01:17:22.891122 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-03-29 01:17:22.891129 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-29 01:17:22.891142 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-03-29 01:17:22.891149 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-29 01:17:22.891156 | orchestrator | 2026-03-29 01:17:22.891163 | orchestrator | 2026-03-29 01:17:22.891170 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:17:22.891176 | orchestrator | Sunday 29 March 2026 01:15:26 +0000 (0:00:00.412) 0:08:19.212 ********** 2026-03-29 01:17:22.891183 | orchestrator | =============================================================================== 2026-03-29 01:17:22.891190 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 39.79s 2026-03-29 01:17:22.891197 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.53s 2026-03-29 01:17:22.891203 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.29s 2026-03-29 01:17:22.891210 | orchestrator | nova-cell : 
Restart nova-ssh container --------------------------------- 21.29s 2026-03-29 01:17:22.891216 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.16s 2026-03-29 01:17:22.891222 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.12s 2026-03-29 01:17:22.891227 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.22s 2026-03-29 01:17:22.891233 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.05s 2026-03-29 01:17:22.891239 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.38s 2026-03-29 01:17:22.891250 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.95s 2026-03-29 01:17:22.891256 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.83s 2026-03-29 01:17:22.891262 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.51s 2026-03-29 01:17:22.891268 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.39s 2026-03-29 01:17:22.891275 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.97s 2026-03-29 01:17:22.891282 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.13s 2026-03-29 01:17:22.891288 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.84s 2026-03-29 01:17:22.891295 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.63s 2026-03-29 01:17:22.891302 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.29s 2026-03-29 01:17:22.891309 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.02s 2026-03-29 01:17:22.891315 | orchestrator | nova-cell : Copying files 
for nova-ssh ---------------------------------- 7.26s 2026-03-29 01:17:25.924702 | orchestrator | 2026-03-29 01:17:25 | INFO  | Task ac3837b9-09b0-4de4-9bb1-a2f297ab672d is in state SUCCESS 2026-03-29 01:17:25.926328 | orchestrator | 2026-03-29 01:17:25.926378 | orchestrator | 2026-03-29 01:17:25.926386 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:17:25.926392 | orchestrator | 2026-03-29 01:17:25.926398 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:17:25.926404 | orchestrator | Sunday 29 March 2026 01:11:15 +0000 (0:00:00.254) 0:00:00.254 ********** 2026-03-29 01:17:25.926410 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:17:25.926416 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:17:25.926421 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:17:25.926427 | orchestrator | 2026-03-29 01:17:25.926432 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:17:25.926438 | orchestrator | Sunday 29 March 2026 01:11:15 +0000 (0:00:00.299) 0:00:00.553 ********** 2026-03-29 01:17:25.926443 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-29 01:17:25.926467 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-29 01:17:25.926473 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-29 01:17:25.926478 | orchestrator | 2026-03-29 01:17:25.926484 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-29 01:17:25.926489 | orchestrator | 2026-03-29 01:17:25.926495 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-29 01:17:25.926503 | orchestrator | Sunday 29 March 2026 01:11:16 +0000 (0:00:00.552) 0:00:01.105 ********** 2026-03-29 01:17:25.926514 | orchestrator | included: 
/ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:17:25.926528 | orchestrator | 2026-03-29 01:17:25.926538 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-29 01:17:25.926547 | orchestrator | Sunday 29 March 2026 01:11:17 +0000 (0:00:00.539) 0:00:01.645 ********** 2026-03-29 01:17:25.926557 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-29 01:17:25.926568 | orchestrator | 2026-03-29 01:17:25.926576 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-29 01:17:25.926582 | orchestrator | Sunday 29 March 2026 01:11:20 +0000 (0:00:03.536) 0:00:05.181 ********** 2026-03-29 01:17:25.926587 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-29 01:17:25.926593 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-29 01:17:25.926599 | orchestrator | 2026-03-29 01:17:25.926604 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-29 01:17:25.926609 | orchestrator | Sunday 29 March 2026 01:11:26 +0000 (0:00:05.683) 0:00:10.864 ********** 2026-03-29 01:17:25.926615 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-29 01:17:25.926620 | orchestrator | 2026-03-29 01:17:25.926626 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-29 01:17:25.926631 | orchestrator | Sunday 29 March 2026 01:11:29 +0000 (0:00:02.969) 0:00:13.834 ********** 2026-03-29 01:17:25.926637 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-29 01:17:25.926642 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-29 01:17:25.926648 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 
2026-03-29 01:17:25.926653 | orchestrator |
2026-03-29 01:17:25.926659 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-03-29 01:17:25.926664 | orchestrator | Sunday 29 March 2026 01:11:37 +0000 (0:00:08.758) 0:00:22.593 **********
2026-03-29 01:17:25.926669 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-29 01:17:25.926675 | orchestrator |
2026-03-29 01:17:25.926680 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-03-29 01:17:25.926686 | orchestrator | Sunday 29 March 2026 01:11:41 +0000 (0:00:03.330) 0:00:25.924 **********
2026-03-29 01:17:25.926691 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-29 01:17:25.926696 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-29 01:17:25.926702 | orchestrator |
2026-03-29 01:17:25.926727 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-03-29 01:17:25.926736 | orchestrator | Sunday 29 March 2026 01:11:49 +0000 (0:00:07.949) 0:00:33.874 **********
2026-03-29 01:17:25.926741 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-03-29 01:17:25.926747 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-03-29 01:17:25.926760 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-03-29 01:17:25.926766 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-03-29 01:17:25.926772 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-03-29 01:17:25.926777 | orchestrator |
2026-03-29 01:17:25.926782 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-29 01:17:25.926794 | orchestrator | Sunday 29 March 2026 01:12:05 +0000 (0:00:16.299) 0:00:50.173 **********
2026-03-29 01:17:25.926800 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:17:25.926805 | orchestrator |
2026-03-29 01:17:25.926811 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-03-29 01:17:25.926816 | orchestrator | Sunday 29 March 2026 01:12:06 +0000 (0:00:00.585) 0:00:50.758 **********
2026-03-29 01:17:25.926821 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:25.926827 | orchestrator |
2026-03-29 01:17:25.926832 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-03-29 01:17:25.926838 | orchestrator | Sunday 29 March 2026 01:12:11 +0000 (0:00:05.272) 0:00:56.031 **********
2026-03-29 01:17:25.926843 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:25.926849 | orchestrator |
2026-03-29 01:17:25.926854 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-29 01:17:25.926870 | orchestrator | Sunday 29 March 2026 01:12:15 +0000 (0:00:04.529) 0:01:00.560 **********
2026-03-29 01:17:25.926876 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:25.926881 | orchestrator |
2026-03-29 01:17:25.926887 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-03-29 01:17:25.926892 | orchestrator | Sunday 29 March 2026 01:12:19 +0000 (0:00:03.791) 0:01:04.352 **********
2026-03-29 01:17:25.926898 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-29 01:17:25.926903 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-29 01:17:25.926908 | orchestrator |
2026-03-29 01:17:25.926914 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-03-29 01:17:25.926919 | orchestrator | Sunday 29 March 2026 01:12:30 +0000 (0:00:10.731) 0:01:15.084 **********
2026-03-29 01:17:25.926926 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-03-29 01:17:25.926932 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-03-29 01:17:25.926939 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-03-29 01:17:25.926947 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-03-29 01:17:25.926953 | orchestrator |
2026-03-29 01:17:25.926959 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-03-29 01:17:25.926965 | orchestrator | Sunday 29 March 2026 01:12:45 +0000 (0:00:14.803) 0:01:29.887 **********
2026-03-29 01:17:25.926971 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:25.926977 | orchestrator |
2026-03-29 01:17:25.926983 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-03-29 01:17:25.926989 | orchestrator | Sunday 29 March 2026 01:12:50 +0000 (0:00:05.062) 0:01:34.950 **********
2026-03-29 01:17:25.926995 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:25.927002 | orchestrator |
2026-03-29 01:17:25.927008 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-03-29 01:17:25.927014 | orchestrator | Sunday 29 March 2026 01:12:55 +0000 (0:00:05.361) 0:01:40.311 **********
2026-03-29 01:17:25.927020 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:25.927026 | orchestrator |
2026-03-29 01:17:25.927033 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-03-29 01:17:25.927039 | orchestrator | Sunday 29 March 2026 01:12:55 +0000 (0:00:00.221) 0:01:40.532 **********
2026-03-29 01:17:25.927045 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:25.927051 | orchestrator |
2026-03-29 01:17:25.927057 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-29 01:17:25.927063 | orchestrator | Sunday 29 March 2026 01:12:59 +0000 (0:00:03.371) 0:01:43.905 **********
2026-03-29 01:17:25.927073 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-1, testbed-node-2, testbed-node-0
2026-03-29 01:17:25.927079 | orchestrator |
2026-03-29 01:17:25.927085 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-03-29 01:17:25.927092 | orchestrator | Sunday 29 March 2026 01:13:00 +0000 (0:00:01.300) 0:01:45.205 **********
2026-03-29 01:17:25.927097 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:17:25.927104 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:25.927110 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:17:25.927116 | orchestrator |
2026-03-29 01:17:25.927122 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-03-29 01:17:25.927128 | orchestrator | Sunday 29 March 2026 01:13:05 +0000 (0:00:04.694) 0:01:49.899 **********
2026-03-29 01:17:25.927134 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:25.927140 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:17:25.927146 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:17:25.927152 | orchestrator |
2026-03-29 01:17:25.927158 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-03-29 01:17:25.927164 | orchestrator | Sunday 29 March 2026 01:13:09 +0000 (0:00:04.014) 0:01:53.914 **********
2026-03-29 01:17:25.927463 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:25.927475 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:17:25.927484 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:17:25.927493 | orchestrator |
2026-03-29 01:17:25.927502 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-03-29 01:17:25.927517 | orchestrator | Sunday 29 March 2026 01:13:10 +0000 (0:00:00.778) 0:01:54.692 **********
2026-03-29 01:17:25.927527 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:17:25.927536 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:25.927542 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:17:25.927547 | orchestrator |
2026-03-29 01:17:25.927553 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-03-29 01:17:25.927558 | orchestrator | Sunday 29 March 2026 01:13:11 +0000 (0:00:01.779) 0:01:56.472 **********
2026-03-29 01:17:25.927563 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:17:25.927569 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:17:25.927574 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:25.927579 | orchestrator |
2026-03-29 01:17:25.927585 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-03-29 01:17:25.927590 | orchestrator | Sunday 29 March 2026 01:13:13 +0000 (0:00:01.325) 0:01:57.798 **********
2026-03-29 01:17:25.927595 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:25.927601 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:17:25.927606 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:17:25.927611 | orchestrator |
2026-03-29 01:17:25.927617 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-03-29 01:17:25.927622 | orchestrator | Sunday 29 March 2026 01:13:14 +0000 (0:00:01.134) 0:01:58.932 **********
2026-03-29 01:17:25.927628 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:17:25.927633 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:17:25.927638 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:25.927644 | orchestrator |
2026-03-29 01:17:25.927655 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-03-29 01:17:25.927661 | orchestrator | Sunday 29 March 2026 01:13:16 +0000 (0:00:02.084) 0:02:01.017 **********
2026-03-29 01:17:25.927666 | orchestrator | changed: [testbed-node-2]
2026-03-29 01:17:25.927672 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:17:25.927677 | orchestrator | changed: [testbed-node-1]
2026-03-29 01:17:25.927683 | orchestrator |
2026-03-29 01:17:25.927688 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-03-29 01:17:25.927694 | orchestrator | Sunday 29 March 2026 01:13:18 +0000 (0:00:01.857) 0:02:02.875 **********
2026-03-29 01:17:25.927699 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:25.927730 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:17:25.927736 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:17:25.927742 | orchestrator |
2026-03-29 01:17:25.927747 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-03-29 01:17:25.927753 | orchestrator | Sunday 29 March 2026 01:13:18 +0000 (0:00:00.680) 0:02:03.555 **********
2026-03-29 01:17:25.927758 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:17:25.927764 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:17:25.927769 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:25.927775 | orchestrator |
2026-03-29 01:17:25.927780 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-29 01:17:25.927785 | orchestrator | Sunday 29 March 2026 01:13:22 +0000 (0:00:03.166) 0:02:06.722 **********
2026-03-29 01:17:25.927791 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:17:25.927797 | orchestrator |
2026-03-29 01:17:25.927802 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-03-29 01:17:25.927807 | orchestrator | Sunday 29 March 2026 01:13:22 +0000 (0:00:00.796) 0:02:07.519 **********
2026-03-29 01:17:25.927813 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:25.927818 | orchestrator |
2026-03-29 01:17:25.927824 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-29 01:17:25.927829 | orchestrator | Sunday 29 March 2026 01:13:26 +0000 (0:00:03.633) 0:02:11.152 **********
2026-03-29 01:17:25.927835 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:25.927840 | orchestrator |
2026-03-29 01:17:25.927846 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-03-29 01:17:25.927852 | orchestrator | Sunday 29 March 2026 01:13:29 +0000 (0:00:02.764) 0:02:13.916 **********
2026-03-29 01:17:25.927862 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-29 01:17:25.927877 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-29 01:17:25.928114 | orchestrator |
2026-03-29 01:17:25.928120 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-03-29 01:17:25.928126 | orchestrator | Sunday 29 March 2026 01:13:35 +0000 (0:00:05.774) 0:02:19.691 **********
2026-03-29 01:17:25.928131 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:25.928137 | orchestrator |
2026-03-29 01:17:25.928143 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-03-29 01:17:25.928200 | orchestrator | Sunday 29 March 2026 01:13:38 +0000 (0:00:03.371) 0:02:23.062 **********
2026-03-29 01:17:25.928207 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:17:25.928213 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:17:25.928218 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:17:25.928224 | orchestrator |
2026-03-29 01:17:25.928229 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-03-29 01:17:25.928238 | orchestrator | Sunday 29 March 2026 01:13:38 +0000 (0:00:00.338) 0:02:23.401 **********
2026-03-29 01:17:25.928251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-29 01:17:25.928281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-29 01:17:25.928294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-29 01:17:25.928310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-29 01:17:25.928316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-29 01:17:25.928322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-29 01:17:25.928331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:17:25.928399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:17:25.928431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:17:25.928444 | orchestrator |
2026-03-29 01:17:25.928450 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-03-29 01:17:25.928456 | orchestrator | Sunday 29 March 2026 01:13:41 +0000 (0:00:02.393) 0:02:25.794 **********
2026-03-29 01:17:25.928462 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:25.928467 | orchestrator |
2026-03-29 01:17:25.928473 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-03-29 01:17:25.928478 | orchestrator | Sunday 29 March 2026 01:13:41 +0000 (0:00:00.136) 0:02:25.931 **********
2026-03-29 01:17:25.928483 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:25.928489 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:25.928495 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:25.928500 | orchestrator |
2026-03-29 01:17:25.928509 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-03-29 01:17:25.928518 | orchestrator | Sunday 29 March 2026 01:13:41 +0000 (0:00:00.490) 0:02:26.422 **********
2026-03-29 01:17:25.928528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-29 01:17:25.928544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-29 01:17:25.928555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:17:25.928597 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:17:25.928633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-29 01:17:25.928643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-29 01:17:25.928649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:17:25.928677 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:17:25.928701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-29 01:17:25.928764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-29 01:17:25.928776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-29 01:17:25.928790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-29 01:17:25.928800 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:17:25.928806 | orchestrator |
2026-03-29 01:17:25.928811 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-29 01:17:25.928817 | orchestrator | Sunday 29 March 2026 01:13:42 +0000 (0:00:00.679) 0:02:27.101 **********
2026-03-29 01:17:25.928822 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-29 01:17:25.928828 | orchestrator |
2026-03-29 01:17:25.928833 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-03-29 01:17:25.928839 | orchestrator | Sunday 29 March 2026 01:13:43 +0000 (0:00:00.595) 0:02:27.696 **********
2026-03-29 01:17:25.928848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-29 01:17:25.928874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-29 01:17:25.928881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'],
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:25.928897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:17:25.928903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:17:25.928912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 
01:17:25.928918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.928945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.928956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}}) 2026-03-29 01:17:25.928965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.928980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.928989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929001 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929035 | orchestrator | 2026-03-29 01:17:25.929045 | orchestrator | TASK 
[service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-29 01:17:25.929054 | orchestrator | Sunday 29 March 2026 01:13:48 +0000 (0:00:05.036) 0:02:32.733 ********** 2026-03-29 01:17:25.929064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:17:25.929075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:17:25.929081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:17:25.929108 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:25.929114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:17:25.929124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:17:25.929130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:17:25.929150 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:25.929160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:17:25.929167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:17:25.929176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:17:25.929193 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:25.929200 | orchestrator | 2026-03-29 01:17:25.929205 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-29 01:17:25.929211 | orchestrator | Sunday 29 March 2026 01:13:48 +0000 (0:00:00.713) 0:02:33.447 ********** 2026-03-29 01:17:25.929219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:17:25.929229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:17:25.929235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:17:25.929257 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:25.929266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:17:25.929272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:17:25.929286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:17:25.929321 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:25.929331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-29 01:17:25.929341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-29 01:17:25.929355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-29 01:17:25.929380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-03-29 01:17:25.929396 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:25.929405 | orchestrator | 2026-03-29 01:17:25.929413 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-29 01:17:25.929422 | orchestrator | Sunday 29 March 2026 01:13:49 +0000 (0:00:00.857) 0:02:34.304 ********** 2026-03-29 01:17:25.929431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:25.929441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:25.929456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:25.929472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:17:25.929505 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:17:25.929516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:17:25.929526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929607 | orchestrator | 2026-03-29 01:17:25.929612 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-29 01:17:25.929618 | orchestrator | Sunday 29 March 2026 01:13:54 +0000 (0:00:04.760) 0:02:39.065 ********** 2026-03-29 01:17:25.929623 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-29 01:17:25.929632 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-29 01:17:25.929639 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-29 01:17:25.929648 | orchestrator | 2026-03-29 01:17:25.929661 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-29 01:17:25.929671 | orchestrator | Sunday 29 March 2026 01:13:56 +0000 (0:00:01.826) 0:02:40.891 ********** 2026-03-29 01:17:25.929687 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:25.929704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:25.929787 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:25.929798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:17:25.929812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 
2026-03-29 01:17:25.929822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:17:25.929841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929853 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929872 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 
'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.929927 | orchestrator | 2026-03-29 01:17:25.929935 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-29 01:17:25.929943 | orchestrator | Sunday 29 March 2026 01:14:13 +0000 (0:00:17.449) 0:02:58.340 ********** 2026-03-29 01:17:25.929952 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:25.929961 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:25.929970 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:25.929979 | orchestrator | 2026-03-29 01:17:25.929987 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-29 01:17:25.929995 | orchestrator | Sunday 29 March 2026 01:14:15 +0000 (0:00:01.473) 0:02:59.814 ********** 2026-03-29 01:17:25.930003 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-29 01:17:25.930047 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-29 01:17:25.930057 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-29 01:17:25.930063 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-29 01:17:25.930069 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-29 01:17:25.930074 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-29 01:17:25.930079 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-29 01:17:25.930085 | 
orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-29 01:17:25.930091 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-29 01:17:25.930096 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-29 01:17:25.930102 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-29 01:17:25.930107 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-29 01:17:25.930113 | orchestrator | 2026-03-29 01:17:25.930119 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-29 01:17:25.930130 | orchestrator | Sunday 29 March 2026 01:14:20 +0000 (0:00:05.629) 0:03:05.443 ********** 2026-03-29 01:17:25.930136 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-29 01:17:25.930141 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-29 01:17:25.930147 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-29 01:17:25.930152 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-29 01:17:25.930158 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-29 01:17:25.930163 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-29 01:17:25.930169 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-29 01:17:25.930178 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-29 01:17:25.930184 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-29 01:17:25.930189 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-29 01:17:25.930195 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-29 01:17:25.930200 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-29 01:17:25.930205 | orchestrator | 
2026-03-29 01:17:25.930210 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-29 01:17:25.930216 | orchestrator | Sunday 29 March 2026 01:14:26 +0000 (0:00:05.509) 0:03:10.953 ********** 2026-03-29 01:17:25.930221 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-29 01:17:25.930227 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-29 01:17:25.930232 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-29 01:17:25.930237 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-29 01:17:25.930242 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-29 01:17:25.930247 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-29 01:17:25.930253 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-29 01:17:25.930258 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-29 01:17:25.930268 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-29 01:17:25.930274 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-29 01:17:25.930279 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-29 01:17:25.930284 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-29 01:17:25.930289 | orchestrator | 2026-03-29 01:17:25.930295 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-29 01:17:25.930300 | orchestrator | Sunday 29 March 2026 01:14:31 +0000 (0:00:04.769) 0:03:15.722 ********** 2026-03-29 01:17:25.930305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:25.930311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:25.930325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-29 01:17:25.930331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:17:25.930340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:17:25.930346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-29 01:17:25.930351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.930360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.930365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.930374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.930379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.930388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.930394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-29 01:17:25.930399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.930408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-29 01:17:25.930414 | orchestrator | 2026-03-29 01:17:25.930419 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-29 01:17:25.930424 | orchestrator | Sunday 29 March 2026 01:14:35 +0000 (0:00:04.257) 0:03:19.979 ********** 2026-03-29 01:17:25.930429 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:17:25.930434 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:17:25.930439 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:17:25.930444 | orchestrator | 2026-03-29 01:17:25.930449 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-29 01:17:25.930455 | orchestrator | Sunday 29 March 2026 01:14:35 +0000 (0:00:00.318) 0:03:20.297 ********** 2026-03-29 01:17:25.930460 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:25.930465 | orchestrator | 2026-03-29 01:17:25.930470 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-29 01:17:25.930475 | orchestrator | Sunday 29 March 2026 01:14:37 +0000 (0:00:01.995) 0:03:22.293 ********** 2026-03-29 01:17:25.930480 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:25.930485 | orchestrator | 2026-03-29 01:17:25.930490 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-29 01:17:25.930495 | orchestrator | Sunday 29 March 2026 01:14:39 +0000 (0:00:02.225) 0:03:24.518 ********** 2026-03-29 01:17:25.930514 | orchestrator | changed: [testbed-node-0] 
2026-03-29 01:17:25.930523 | orchestrator | 2026-03-29 01:17:25.930535 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-29 01:17:25.930546 | orchestrator | Sunday 29 March 2026 01:14:42 +0000 (0:00:03.068) 0:03:27.587 ********** 2026-03-29 01:17:25.930559 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:25.930567 | orchestrator | 2026-03-29 01:17:25.930576 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-29 01:17:25.930585 | orchestrator | Sunday 29 March 2026 01:14:45 +0000 (0:00:02.859) 0:03:30.447 ********** 2026-03-29 01:17:25.930593 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:25.930602 | orchestrator | 2026-03-29 01:17:25.930611 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-29 01:17:25.930620 | orchestrator | Sunday 29 March 2026 01:15:07 +0000 (0:00:21.396) 0:03:51.844 ********** 2026-03-29 01:17:25.930629 | orchestrator | 2026-03-29 01:17:25.930637 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-29 01:17:25.930643 | orchestrator | Sunday 29 March 2026 01:15:07 +0000 (0:00:00.069) 0:03:51.913 ********** 2026-03-29 01:17:25.930648 | orchestrator | 2026-03-29 01:17:25.930653 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-29 01:17:25.930660 | orchestrator | Sunday 29 March 2026 01:15:07 +0000 (0:00:00.063) 0:03:51.977 ********** 2026-03-29 01:17:25.930668 | orchestrator | 2026-03-29 01:17:25.930677 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-29 01:17:25.930692 | orchestrator | Sunday 29 March 2026 01:15:07 +0000 (0:00:00.069) 0:03:52.046 ********** 2026-03-29 01:17:25.930723 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:25.930733 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 01:17:25.930741 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:25.930749 | orchestrator | 2026-03-29 01:17:25.930757 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-29 01:17:25.930765 | orchestrator | Sunday 29 March 2026 01:15:17 +0000 (0:00:10.005) 0:04:02.052 ********** 2026-03-29 01:17:25.930774 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:25.930783 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:25.930791 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:25.930800 | orchestrator | 2026-03-29 01:17:25.930808 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-29 01:17:25.930818 | orchestrator | Sunday 29 March 2026 01:15:29 +0000 (0:00:11.612) 0:04:13.665 ********** 2026-03-29 01:17:25.930826 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:25.930840 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:25.930850 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:25.930858 | orchestrator | 2026-03-29 01:17:25.930866 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-29 01:17:25.930875 | orchestrator | Sunday 29 March 2026 01:15:33 +0000 (0:00:04.858) 0:04:18.523 ********** 2026-03-29 01:17:25.930883 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:25.930891 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:17:25.930899 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:25.930907 | orchestrator | 2026-03-29 01:17:25.930915 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-29 01:17:25.930923 | orchestrator | Sunday 29 March 2026 01:15:43 +0000 (0:00:09.579) 0:04:28.103 ********** 2026-03-29 01:17:25.930931 | orchestrator | changed: [testbed-node-1] 2026-03-29 01:17:25.930940 | orchestrator | changed: 
[testbed-node-0] 2026-03-29 01:17:25.930949 | orchestrator | changed: [testbed-node-2] 2026-03-29 01:17:25.930957 | orchestrator | 2026-03-29 01:17:25.930965 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:17:25.930974 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-29 01:17:25.930982 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 01:17:25.930990 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-29 01:17:25.930999 | orchestrator | 2026-03-29 01:17:25.931009 | orchestrator | 2026-03-29 01:17:25.931018 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:17:25.931027 | orchestrator | Sunday 29 March 2026 01:15:53 +0000 (0:00:09.997) 0:04:38.101 ********** 2026-03-29 01:17:25.931036 | orchestrator | =============================================================================== 2026-03-29 01:17:25.931045 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.40s 2026-03-29 01:17:25.931054 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.45s 2026-03-29 01:17:25.931060 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.30s 2026-03-29 01:17:25.931065 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.80s 2026-03-29 01:17:25.931070 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.61s 2026-03-29 01:17:25.931075 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.73s 2026-03-29 01:17:25.931080 | orchestrator | octavia : Restart octavia-api container -------------------------------- 10.01s 2026-03-29 
01:17:25.931085 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.00s 2026-03-29 01:17:25.931090 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.58s 2026-03-29 01:17:25.931102 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.76s 2026-03-29 01:17:25.931107 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.95s 2026-03-29 01:17:25.931112 | orchestrator | octavia : Get security groups for octavia ------------------------------- 5.77s 2026-03-29 01:17:25.931122 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.68s 2026-03-29 01:17:25.931128 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.63s 2026-03-29 01:17:25.931133 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.51s 2026-03-29 01:17:25.931138 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.36s 2026-03-29 01:17:25.931143 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.27s 2026-03-29 01:17:25.931148 | orchestrator | octavia : Create loadbalancer management network ------------------------ 5.06s 2026-03-29 01:17:25.931153 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.04s 2026-03-29 01:17:25.931159 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 4.86s 2026-03-29 01:17:25.931164 | orchestrator | 2026-03-29 01:17:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:17:28.969183 | orchestrator | 2026-03-29 01:17:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:17:32.006154 | orchestrator | 2026-03-29 01:17:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:17:35.048231 | 
orchestrator | 2026-03-29 01:17:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:17:38.096441 | orchestrator | 2026-03-29 01:17:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:17:41.133999 | orchestrator | 2026-03-29 01:17:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:17:44.171754 | orchestrator | 2026-03-29 01:17:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:17:47.206576 | orchestrator | 2026-03-29 01:17:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:17:50.240180 | orchestrator | 2026-03-29 01:17:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:17:53.277373 | orchestrator | 2026-03-29 01:17:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:17:56.319802 | orchestrator | 2026-03-29 01:17:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:17:59.353409 | orchestrator | 2026-03-29 01:17:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:18:02.391679 | orchestrator | 2026-03-29 01:18:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:18:05.430252 | orchestrator | 2026-03-29 01:18:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:18:08.468778 | orchestrator | 2026-03-29 01:18:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:18:11.508081 | orchestrator | 2026-03-29 01:18:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:18:14.544549 | orchestrator | 2026-03-29 01:18:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:18:17.583580 | orchestrator | 2026-03-29 01:18:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:18:20.623902 | orchestrator | 2026-03-29 01:18:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-29 01:18:23.665254 | orchestrator | 2026-03-29 01:18:23 | INFO  | 
Wait 1 second(s) until refresh of running tasks 2026-03-29 01:18:26.707223 | orchestrator | 2026-03-29 01:18:26.997405 | orchestrator | 2026-03-29 01:18:27.002147 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Mar 29 01:18:27 UTC 2026 2026-03-29 01:18:27.002200 | orchestrator | 2026-03-29 01:18:27.370297 | orchestrator | ok: Runtime: 0:35:20.202864 2026-03-29 01:18:27.632239 | 2026-03-29 01:18:27.632386 | TASK [Bootstrap services] 2026-03-29 01:18:28.374506 | orchestrator | 2026-03-29 01:18:28.374604 | orchestrator | # BOOTSTRAP 2026-03-29 01:18:28.374617 | orchestrator | 2026-03-29 01:18:28.374626 | orchestrator | + set -e 2026-03-29 01:18:28.374634 | orchestrator | + echo 2026-03-29 01:18:28.374643 | orchestrator | + echo '# BOOTSTRAP' 2026-03-29 01:18:28.374654 | orchestrator | + echo 2026-03-29 01:18:28.374677 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-29 01:18:28.383876 | orchestrator | + set -e 2026-03-29 01:18:28.383928 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-29 01:18:32.660008 | orchestrator | 2026-03-29 01:18:32 | INFO  | It takes a moment until task b71be264-e8f4-4f53-bc4d-8c3b563b5f47 (flavor-manager) has been started and output is visible here. 
2026-03-29 01:18:39.785865 | orchestrator | 2026-03-29 01:18:35 | INFO  | Flavor SCS-1L-1 created 2026-03-29 01:18:39.785974 | orchestrator | 2026-03-29 01:18:35 | INFO  | Flavor SCS-1L-1-5 created 2026-03-29 01:18:39.785988 | orchestrator | 2026-03-29 01:18:35 | INFO  | Flavor SCS-1V-2 created 2026-03-29 01:18:39.785994 | orchestrator | 2026-03-29 01:18:35 | INFO  | Flavor SCS-1V-2-5 created 2026-03-29 01:18:39.786000 | orchestrator | 2026-03-29 01:18:35 | INFO  | Flavor SCS-1V-4 created 2026-03-29 01:18:39.786077 | orchestrator | 2026-03-29 01:18:36 | INFO  | Flavor SCS-1V-4-10 created 2026-03-29 01:18:39.786084 | orchestrator | 2026-03-29 01:18:36 | INFO  | Flavor SCS-1V-8 created 2026-03-29 01:18:39.786092 | orchestrator | 2026-03-29 01:18:36 | INFO  | Flavor SCS-1V-8-20 created 2026-03-29 01:18:39.786109 | orchestrator | 2026-03-29 01:18:36 | INFO  | Flavor SCS-2V-4 created 2026-03-29 01:18:39.786116 | orchestrator | 2026-03-29 01:18:36 | INFO  | Flavor SCS-2V-4-10 created 2026-03-29 01:18:39.786123 | orchestrator | 2026-03-29 01:18:36 | INFO  | Flavor SCS-2V-8 created 2026-03-29 01:18:39.786129 | orchestrator | 2026-03-29 01:18:37 | INFO  | Flavor SCS-2V-8-20 created 2026-03-29 01:18:39.786136 | orchestrator | 2026-03-29 01:18:37 | INFO  | Flavor SCS-2V-16 created 2026-03-29 01:18:39.786163 | orchestrator | 2026-03-29 01:18:37 | INFO  | Flavor SCS-2V-16-50 created 2026-03-29 01:18:39.786171 | orchestrator | 2026-03-29 01:18:37 | INFO  | Flavor SCS-4V-8 created 2026-03-29 01:18:39.786177 | orchestrator | 2026-03-29 01:18:37 | INFO  | Flavor SCS-4V-8-20 created 2026-03-29 01:18:39.786183 | orchestrator | 2026-03-29 01:18:37 | INFO  | Flavor SCS-4V-16 created 2026-03-29 01:18:39.786189 | orchestrator | 2026-03-29 01:18:37 | INFO  | Flavor SCS-4V-16-50 created 2026-03-29 01:18:39.786195 | orchestrator | 2026-03-29 01:18:38 | INFO  | Flavor SCS-4V-32 created 2026-03-29 01:18:39.786202 | orchestrator | 2026-03-29 01:18:38 | INFO  | Flavor SCS-4V-32-100 created 
2026-03-29 01:18:39.786209 | orchestrator | 2026-03-29 01:18:38 | INFO  | Flavor SCS-8V-16 created 2026-03-29 01:18:39.786215 | orchestrator | 2026-03-29 01:18:38 | INFO  | Flavor SCS-8V-16-50 created 2026-03-29 01:18:39.786222 | orchestrator | 2026-03-29 01:18:38 | INFO  | Flavor SCS-8V-32 created 2026-03-29 01:18:39.786230 | orchestrator | 2026-03-29 01:18:38 | INFO  | Flavor SCS-8V-32-100 created 2026-03-29 01:18:39.786234 | orchestrator | 2026-03-29 01:18:38 | INFO  | Flavor SCS-16V-32 created 2026-03-29 01:18:39.786239 | orchestrator | 2026-03-29 01:18:39 | INFO  | Flavor SCS-16V-32-100 created 2026-03-29 01:18:39.786243 | orchestrator | 2026-03-29 01:18:39 | INFO  | Flavor SCS-2V-4-20s created 2026-03-29 01:18:39.786249 | orchestrator | 2026-03-29 01:18:39 | INFO  | Flavor SCS-4V-8-50s created 2026-03-29 01:18:39.786256 | orchestrator | 2026-03-29 01:18:39 | INFO  | Flavor SCS-8V-32-100s created 2026-03-29 01:18:42.119836 | orchestrator | 2026-03-29 01:18:42 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-29 01:18:52.202882 | orchestrator | 2026-03-29 01:18:52 | INFO  | Task 25c82fae-94bf-4561-9fab-ce8a3edc04bb (bootstrap-basic) was prepared for execution. 2026-03-29 01:18:52.202957 | orchestrator | 2026-03-29 01:18:52 | INFO  | It takes a moment until task 25c82fae-94bf-4561-9fab-ce8a3edc04bb (bootstrap-basic) has been started and output is visible here. 
2026-03-29 01:19:38.199407 | orchestrator | 2026-03-29 01:19:38.199459 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-29 01:19:38.199466 | orchestrator | 2026-03-29 01:19:38.199470 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-29 01:19:38.199474 | orchestrator | Sunday 29 March 2026 01:18:56 +0000 (0:00:00.069) 0:00:00.069 ********** 2026-03-29 01:19:38.199478 | orchestrator | ok: [localhost] 2026-03-29 01:19:38.199482 | orchestrator | 2026-03-29 01:19:38.199486 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-29 01:19:38.199490 | orchestrator | Sunday 29 March 2026 01:18:58 +0000 (0:00:01.794) 0:00:01.864 ********** 2026-03-29 01:19:38.199494 | orchestrator | ok: [localhost] 2026-03-29 01:19:38.199498 | orchestrator | 2026-03-29 01:19:38.199502 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-29 01:19:38.199506 | orchestrator | Sunday 29 March 2026 01:19:07 +0000 (0:00:08.905) 0:00:10.769 ********** 2026-03-29 01:19:38.199510 | orchestrator | changed: [localhost] 2026-03-29 01:19:38.199514 | orchestrator | 2026-03-29 01:19:38.199517 | orchestrator | TASK [Create public network] *************************************************** 2026-03-29 01:19:38.199521 | orchestrator | Sunday 29 March 2026 01:19:14 +0000 (0:00:07.433) 0:00:18.203 ********** 2026-03-29 01:19:38.199525 | orchestrator | changed: [localhost] 2026-03-29 01:19:38.199531 | orchestrator | 2026-03-29 01:19:38.199537 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-29 01:19:38.199544 | orchestrator | Sunday 29 March 2026 01:19:20 +0000 (0:00:05.824) 0:00:24.028 ********** 2026-03-29 01:19:38.199557 | orchestrator | changed: [localhost] 2026-03-29 01:19:38.199563 | orchestrator | 2026-03-29 01:19:38.199570 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-29 01:19:38.199576 | orchestrator | Sunday 29 March 2026 01:19:26 +0000 (0:00:06.029) 0:00:30.058 ********** 2026-03-29 01:19:38.199582 | orchestrator | changed: [localhost] 2026-03-29 01:19:38.199588 | orchestrator | 2026-03-29 01:19:38.199594 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-29 01:19:38.199600 | orchestrator | Sunday 29 March 2026 01:19:30 +0000 (0:00:04.377) 0:00:34.435 ********** 2026-03-29 01:19:38.199605 | orchestrator | changed: [localhost] 2026-03-29 01:19:38.199612 | orchestrator | 2026-03-29 01:19:38.199618 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-29 01:19:38.199648 | orchestrator | Sunday 29 March 2026 01:19:34 +0000 (0:00:03.721) 0:00:38.157 ********** 2026-03-29 01:19:38.199656 | orchestrator | ok: [localhost] 2026-03-29 01:19:38.199662 | orchestrator | 2026-03-29 01:19:38.199669 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:19:38.199675 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-29 01:19:38.199682 | orchestrator | 2026-03-29 01:19:38.199689 | orchestrator | 2026-03-29 01:19:38.199693 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:19:38.199697 | orchestrator | Sunday 29 March 2026 01:19:37 +0000 (0:00:03.470) 0:00:41.627 ********** 2026-03-29 01:19:38.199701 | orchestrator | =============================================================================== 2026-03-29 01:19:38.199705 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.91s 2026-03-29 01:19:38.199709 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.43s 2026-03-29 01:19:38.199713 | 
orchestrator | Set public network to default ------------------------------------------- 6.03s 2026-03-29 01:19:38.199716 | orchestrator | Create public network --------------------------------------------------- 5.82s 2026-03-29 01:19:38.199730 | orchestrator | Create public subnet ---------------------------------------------------- 4.38s 2026-03-29 01:19:38.199734 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.72s 2026-03-29 01:19:38.199738 | orchestrator | Create manager role ----------------------------------------------------- 3.47s 2026-03-29 01:19:38.199742 | orchestrator | Gathering Facts --------------------------------------------------------- 1.79s 2026-03-29 01:19:40.680631 | orchestrator | 2026-03-29 01:19:40 | INFO  | It takes a moment until task eef47715-a841-4d97-aaf0-41a234020b01 (image-manager) has been started and output is visible here. 2026-03-29 01:20:21.302622 | orchestrator | 2026-03-29 01:19:43 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-29 01:20:21.302705 | orchestrator | 2026-03-29 01:19:43 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-29 01:20:21.302716 | orchestrator | 2026-03-29 01:19:43 | INFO  | Importing image Cirros 0.6.2 2026-03-29 01:20:21.302723 | orchestrator | 2026-03-29 01:19:43 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-29 01:20:21.302730 | orchestrator | 2026-03-29 01:19:45 | INFO  | Waiting for image to leave queued state... 2026-03-29 01:20:21.302738 | orchestrator | 2026-03-29 01:19:47 | INFO  | Waiting for import to complete... 
2026-03-29 01:20:21.302744 | orchestrator | 2026-03-29 01:19:58 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-29 01:20:21.302750 | orchestrator | 2026-03-29 01:19:58 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-29 01:20:21.302757 | orchestrator | 2026-03-29 01:19:58 | INFO  | Setting internal_version = 0.6.2 2026-03-29 01:20:21.302763 | orchestrator | 2026-03-29 01:19:58 | INFO  | Setting image_original_user = cirros 2026-03-29 01:20:21.302770 | orchestrator | 2026-03-29 01:19:58 | INFO  | Adding tag os:cirros 2026-03-29 01:20:21.302776 | orchestrator | 2026-03-29 01:19:58 | INFO  | Setting property architecture: x86_64 2026-03-29 01:20:21.302782 | orchestrator | 2026-03-29 01:19:59 | INFO  | Setting property hw_disk_bus: scsi 2026-03-29 01:20:21.302788 | orchestrator | 2026-03-29 01:19:59 | INFO  | Setting property hw_rng_model: virtio 2026-03-29 01:20:21.302794 | orchestrator | 2026-03-29 01:19:59 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-29 01:20:21.302800 | orchestrator | 2026-03-29 01:19:59 | INFO  | Setting property hw_watchdog_action: reset 2026-03-29 01:20:21.302806 | orchestrator | 2026-03-29 01:19:59 | INFO  | Setting property hypervisor_type: qemu 2026-03-29 01:20:21.302812 | orchestrator | 2026-03-29 01:19:59 | INFO  | Setting property os_distro: cirros 2026-03-29 01:20:21.302818 | orchestrator | 2026-03-29 01:20:00 | INFO  | Setting property os_purpose: minimal 2026-03-29 01:20:21.302823 | orchestrator | 2026-03-29 01:20:00 | INFO  | Setting property replace_frequency: never 2026-03-29 01:20:21.302829 | orchestrator | 2026-03-29 01:20:00 | INFO  | Setting property uuid_validity: none 2026-03-29 01:20:21.302835 | orchestrator | 2026-03-29 01:20:00 | INFO  | Setting property provided_until: none 2026-03-29 01:20:21.302841 | orchestrator | 2026-03-29 01:20:00 | INFO  | Setting property image_description: Cirros 2026-03-29 01:20:21.302847 | orchestrator | 2026-03-29 01:20:01 | INFO  | 
Setting property image_name: Cirros 2026-03-29 01:20:21.302852 | orchestrator | 2026-03-29 01:20:01 | INFO  | Setting property internal_version: 0.6.2 2026-03-29 01:20:21.302858 | orchestrator | 2026-03-29 01:20:01 | INFO  | Setting property image_original_user: cirros 2026-03-29 01:20:21.302883 | orchestrator | 2026-03-29 01:20:01 | INFO  | Setting property os_version: 0.6.2 2026-03-29 01:20:21.302895 | orchestrator | 2026-03-29 01:20:01 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-29 01:20:21.302903 | orchestrator | 2026-03-29 01:20:02 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-29 01:20:21.302908 | orchestrator | 2026-03-29 01:20:02 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-29 01:20:21.302914 | orchestrator | 2026-03-29 01:20:02 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-29 01:20:21.302919 | orchestrator | 2026-03-29 01:20:02 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-29 01:20:21.302924 | orchestrator | 2026-03-29 01:20:02 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-29 01:20:21.302933 | orchestrator | 2026-03-29 01:20:02 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-29 01:20:21.302939 | orchestrator | 2026-03-29 01:20:02 | INFO  | Importing image Cirros 0.6.3 2026-03-29 01:20:21.302945 | orchestrator | 2026-03-29 01:20:02 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-29 01:20:21.302951 | orchestrator | 2026-03-29 01:20:03 | INFO  | Waiting for image to leave queued state... 2026-03-29 01:20:21.302956 | orchestrator | 2026-03-29 01:20:06 | INFO  | Waiting for import to complete... 
2026-03-29 01:20:21.302975 | orchestrator | 2026-03-29 01:20:16 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-29 01:20:21.302981 | orchestrator | 2026-03-29 01:20:16 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-29 01:20:21.302987 | orchestrator | 2026-03-29 01:20:16 | INFO  | Setting internal_version = 0.6.3 2026-03-29 01:20:21.302992 | orchestrator | 2026-03-29 01:20:16 | INFO  | Setting image_original_user = cirros 2026-03-29 01:20:21.302998 | orchestrator | 2026-03-29 01:20:16 | INFO  | Adding tag os:cirros 2026-03-29 01:20:21.303003 | orchestrator | 2026-03-29 01:20:16 | INFO  | Setting property architecture: x86_64 2026-03-29 01:20:21.303009 | orchestrator | 2026-03-29 01:20:17 | INFO  | Setting property hw_disk_bus: scsi 2026-03-29 01:20:21.303014 | orchestrator | 2026-03-29 01:20:17 | INFO  | Setting property hw_rng_model: virtio 2026-03-29 01:20:21.303020 | orchestrator | 2026-03-29 01:20:17 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-29 01:20:21.303026 | orchestrator | 2026-03-29 01:20:17 | INFO  | Setting property hw_watchdog_action: reset 2026-03-29 01:20:21.303031 | orchestrator | 2026-03-29 01:20:17 | INFO  | Setting property hypervisor_type: qemu 2026-03-29 01:20:21.303037 | orchestrator | 2026-03-29 01:20:18 | INFO  | Setting property os_distro: cirros 2026-03-29 01:20:21.303042 | orchestrator | 2026-03-29 01:20:18 | INFO  | Setting property os_purpose: minimal 2026-03-29 01:20:21.303048 | orchestrator | 2026-03-29 01:20:18 | INFO  | Setting property replace_frequency: never 2026-03-29 01:20:21.303053 | orchestrator | 2026-03-29 01:20:18 | INFO  | Setting property uuid_validity: none 2026-03-29 01:20:21.303059 | orchestrator | 2026-03-29 01:20:18 | INFO  | Setting property provided_until: none 2026-03-29 01:20:21.303065 | orchestrator | 2026-03-29 01:20:19 | INFO  | Setting property image_description: Cirros 2026-03-29 01:20:21.303071 | orchestrator | 2026-03-29 01:20:19 | INFO  | 
Setting property image_name: Cirros 2026-03-29 01:20:21.303076 | orchestrator | 2026-03-29 01:20:19 | INFO  | Setting property internal_version: 0.6.3 2026-03-29 01:20:21.303087 | orchestrator | 2026-03-29 01:20:19 | INFO  | Setting property image_original_user: cirros 2026-03-29 01:20:21.303093 | orchestrator | 2026-03-29 01:20:19 | INFO  | Setting property os_version: 0.6.3 2026-03-29 01:20:21.303099 | orchestrator | 2026-03-29 01:20:20 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-29 01:20:21.303105 | orchestrator | 2026-03-29 01:20:20 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-29 01:20:21.303110 | orchestrator | 2026-03-29 01:20:20 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-29 01:20:21.303116 | orchestrator | 2026-03-29 01:20:20 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-29 01:20:21.303121 | orchestrator | 2026-03-29 01:20:20 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-29 01:20:21.588466 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-29 01:20:23.817131 | orchestrator | 2026-03-29 01:20:23 | INFO  | date: 2026-03-28 2026-03-29 01:20:23.817197 | orchestrator | 2026-03-29 01:20:23 | INFO  | image: octavia-amphora-haproxy-2024.2.20260328.qcow2 2026-03-29 01:20:23.817221 | orchestrator | 2026-03-29 01:20:23 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2 2026-03-29 01:20:23.817436 | orchestrator | 2026-03-29 01:20:23 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2.CHECKSUM 2026-03-29 01:20:23.986185 | orchestrator | 2026-03-29 01:20:23 | INFO  | checksum: d8129f2399256e335fa58752e7bcbe178527a1e3d0a6709e3e9c03f99848308a 2026-03-29 01:20:24.069616 | orchestrator | 
2026-03-29 01:20:24 | INFO  | It takes a moment until task 10d4f111-19d7-4cc1-86d6-0fac9df5f475 (image-manager) has been started and output is visible here. 2026-03-29 01:21:25.396922 | orchestrator | 2026-03-29 01:20:26 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-28' 2026-03-29 01:21:25.396995 | orchestrator | 2026-03-29 01:20:26 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2: 200 2026-03-29 01:21:25.397008 | orchestrator | 2026-03-29 01:20:26 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-28 2026-03-29 01:21:25.397017 | orchestrator | 2026-03-29 01:20:26 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2 2026-03-29 01:21:25.397026 | orchestrator | 2026-03-29 01:20:27 | INFO  | Waiting for image to leave queued state... 2026-03-29 01:21:25.397034 | orchestrator | 2026-03-29 01:20:29 | INFO  | Waiting for import to complete... 2026-03-29 01:21:25.397043 | orchestrator | 2026-03-29 01:20:39 | INFO  | Waiting for import to complete... 2026-03-29 01:21:25.397051 | orchestrator | 2026-03-29 01:20:49 | INFO  | Waiting for import to complete... 2026-03-29 01:21:25.397059 | orchestrator | 2026-03-29 01:21:00 | INFO  | Waiting for import to complete... 2026-03-29 01:21:25.397069 | orchestrator | 2026-03-29 01:21:10 | INFO  | Waiting for import to complete... 
2026-03-29 01:21:25.397077 | orchestrator | 2026-03-29 01:21:20 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-28' successfully completed, reloading images 2026-03-29 01:21:25.397086 | orchestrator | 2026-03-29 01:21:20 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-28' 2026-03-29 01:21:25.397094 | orchestrator | 2026-03-29 01:21:20 | INFO  | Setting internal_version = 2026-03-28 2026-03-29 01:21:25.397119 | orchestrator | 2026-03-29 01:21:20 | INFO  | Setting image_original_user = ubuntu 2026-03-29 01:21:25.397128 | orchestrator | 2026-03-29 01:21:20 | INFO  | Adding tag amphora 2026-03-29 01:21:25.397136 | orchestrator | 2026-03-29 01:21:21 | INFO  | Adding tag os:ubuntu 2026-03-29 01:21:25.397144 | orchestrator | 2026-03-29 01:21:21 | INFO  | Setting property architecture: x86_64 2026-03-29 01:21:25.397152 | orchestrator | 2026-03-29 01:21:21 | INFO  | Setting property hw_disk_bus: scsi 2026-03-29 01:21:25.397160 | orchestrator | 2026-03-29 01:21:21 | INFO  | Setting property hw_rng_model: virtio 2026-03-29 01:21:25.397168 | orchestrator | 2026-03-29 01:21:21 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-29 01:21:25.397176 | orchestrator | 2026-03-29 01:21:22 | INFO  | Setting property hw_watchdog_action: reset 2026-03-29 01:21:25.397184 | orchestrator | 2026-03-29 01:21:22 | INFO  | Setting property hypervisor_type: qemu 2026-03-29 01:21:25.397192 | orchestrator | 2026-03-29 01:21:22 | INFO  | Setting property os_distro: ubuntu 2026-03-29 01:21:25.397202 | orchestrator | 2026-03-29 01:21:22 | INFO  | Setting property replace_frequency: quarterly 2026-03-29 01:21:25.397216 | orchestrator | 2026-03-29 01:21:23 | INFO  | Setting property uuid_validity: last-1 2026-03-29 01:21:25.397236 | orchestrator | 2026-03-29 01:21:23 | INFO  | Setting property provided_until: none 2026-03-29 01:21:25.397251 | orchestrator | 2026-03-29 01:21:23 | INFO  | Setting property os_purpose: network 2026-03-29 01:21:25.397264 | orchestrator 
| 2026-03-29 01:21:23 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-03-29 01:21:25.397290 | orchestrator | 2026-03-29 01:21:23 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-03-29 01:21:25.397303 | orchestrator | 2026-03-29 01:21:24 | INFO  | Setting property internal_version: 2026-03-28 2026-03-29 01:21:25.397315 | orchestrator | 2026-03-29 01:21:24 | INFO  | Setting property image_original_user: ubuntu 2026-03-29 01:21:25.397328 | orchestrator | 2026-03-29 01:21:24 | INFO  | Setting property os_version: 2026-03-28 2026-03-29 01:21:25.397340 | orchestrator | 2026-03-29 01:21:24 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260328.qcow2 2026-03-29 01:21:25.397353 | orchestrator | 2026-03-29 01:21:24 | INFO  | Setting property image_build_date: 2026-03-28 2026-03-29 01:21:25.397366 | orchestrator | 2026-03-29 01:21:25 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-28' 2026-03-29 01:21:25.397378 | orchestrator | 2026-03-29 01:21:25 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-28' 2026-03-29 01:21:25.397391 | orchestrator | 2026-03-29 01:21:25 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-03-29 01:21:25.397419 | orchestrator | 2026-03-29 01:21:25 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-03-29 01:21:25.397434 | orchestrator | 2026-03-29 01:21:25 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-03-29 01:21:25.397448 | orchestrator | 2026-03-29 01:21:25 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-03-29 01:21:25.822796 | orchestrator | ok: Runtime: 0:02:57.720025 2026-03-29 01:21:25.838875 | 2026-03-29 01:21:25.838988 | TASK [Run checks] 2026-03-29 01:21:26.604903 | orchestrator | + set -e 2026-03-29 01:21:26.605071 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-03-29 01:21:26.605089 | orchestrator | ++ export INTERACTIVE=false 2026-03-29 01:21:26.605098 | orchestrator | ++ INTERACTIVE=false 2026-03-29 01:21:26.605104 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 01:21:26.605109 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 01:21:26.605115 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-29 01:21:26.605876 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-29 01:21:26.612185 | orchestrator | 2026-03-29 01:21:26.612287 | orchestrator | # CHECK 2026-03-29 01:21:26.612294 | orchestrator | 2026-03-29 01:21:26.612299 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 01:21:26.612307 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 01:21:26.612311 | orchestrator | + echo 2026-03-29 01:21:26.612315 | orchestrator | + echo '# CHECK' 2026-03-29 01:21:26.612319 | orchestrator | + echo 2026-03-29 01:21:26.612327 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-29 01:21:26.613241 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-29 01:21:26.675353 | orchestrator | 2026-03-29 01:21:26.675435 | orchestrator | ## Containers @ testbed-manager 2026-03-29 01:21:26.675442 | orchestrator | 2026-03-29 01:21:26.675449 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-29 01:21:26.675453 | orchestrator | + echo 2026-03-29 01:21:26.675458 | orchestrator | + echo '## Containers @ testbed-manager' 2026-03-29 01:21:26.675462 | orchestrator | + echo 2026-03-29 01:21:26.675467 | orchestrator | + osism container testbed-manager ps 2026-03-29 01:21:28.668810 | orchestrator | 2026-03-29 01:21:28 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-03-29 01:21:29.051900 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-29 01:21:29.052002 | orchestrator | 7578383e0375 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter 2026-03-29 01:21:29.052019 | orchestrator | a0e78eb7bd4d registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2026-03-29 01:21:29.052026 | orchestrator | 7088e5858131 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-03-29 01:21:29.052036 | orchestrator | 790f7d9e4206 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-03-29 01:21:29.052043 | orchestrator | f6d75779628e registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2026-03-29 01:21:29.052054 | orchestrator | 555b285b1560 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient 2026-03-29 01:21:29.052062 | orchestrator | 2d1310bad04d registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2026-03-29 01:21:29.052068 | orchestrator | 508e897124ca registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-03-29 01:21:29.052092 | orchestrator | 019a579338de registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2026-03-29 01:21:29.052096 | orchestrator | 08ea17ce5628 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin 2026-03-29 01:21:29.052115 | orchestrator | 3051e07b25d3 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 32 minutes openstackclient 
2026-03-29 01:21:29.052119 | orchestrator | 68fddcf06b5e registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 32 minutes ago Up 32 minutes (healthy) 8080/tcp homer 2026-03-29 01:21:29.052123 | orchestrator | 92229a7dcdde registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 56 minutes ago Up 55 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-03-29 01:21:29.052130 | orchestrator | a08e088e8423 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" About an hour ago Up 38 minutes (healthy) manager-inventory_reconciler-1 2026-03-29 01:21:29.052308 | orchestrator | bea509288f19 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) osism-ansible 2026-03-29 01:21:29.052321 | orchestrator | 5eaa6e2ece6b registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) osism-kubernetes 2026-03-29 01:21:29.053632 | orchestrator | 9131cba65b6c registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) ceph-ansible 2026-03-29 01:21:29.053702 | orchestrator | 0bd58c5cbb72 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" About an hour ago Up 39 minutes (healthy) kolla-ansible 2026-03-29 01:21:29.053714 | orchestrator | 144d5dc290c6 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" About an hour ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2026-03-29 01:21:29.053724 | orchestrator | 4dbdc4700420 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 39 minutes (healthy) manager-openstack-1 2026-03-29 01:21:29.053732 | orchestrator | 783a62811523 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-03-29 01:21:29.053739 | orchestrator | 
fac622793ade registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" About an hour ago Up 39 minutes (healthy) osismclient 2026-03-29 01:21:29.053768 | orchestrator | 372ceb65f55d registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 39 minutes (healthy) manager-flower-1 2026-03-29 01:21:29.053775 | orchestrator | a5a7203cb3bf registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 39 minutes (healthy) manager-beat-1 2026-03-29 01:21:29.053782 | orchestrator | 8f6213d52680 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" About an hour ago Up 39 minutes (healthy) 6379/tcp manager-redis-1 2026-03-29 01:21:29.053789 | orchestrator | 0beb257d7509 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" About an hour ago Up 39 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-03-29 01:21:29.053795 | orchestrator | f31127754bd0 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" About an hour ago Up 39 minutes (healthy) manager-listener-1 2026-03-29 01:21:29.053813 | orchestrator | 19405a8aa4db registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" About an hour ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1 2026-03-29 01:21:29.053820 | orchestrator | e543433eaf81 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-03-29 01:21:29.384813 | orchestrator | 2026-03-29 01:21:29.384903 | orchestrator | ## Images @ testbed-manager 2026-03-29 01:21:29.384912 | orchestrator | 2026-03-29 01:21:29.384916 | orchestrator | + echo 2026-03-29 01:21:29.384921 | orchestrator | + echo '## Images @ testbed-manager' 2026-03-29 01:21:29.384926 | orchestrator | + echo 2026-03-29 01:21:29.384930 | orchestrator | + osism container testbed-manager images 
2026-03-29 01:21:31.738849 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-29 01:21:31.738911 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 79a5ae258a23 22 hours ago 239MB 2026-03-29 01:21:31.738920 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 8 weeks ago 41.4MB 2026-03-29 01:21:31.738926 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB 2026-03-29 01:21:31.738932 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 3 months ago 608MB 2026-03-29 01:21:31.738940 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-29 01:21:31.738946 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-29 01:21:31.738953 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-29 01:21:31.738959 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 3 months ago 308MB 2026-03-29 01:21:31.738965 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-29 01:21:31.738984 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 3 months ago 404MB 2026-03-29 01:21:31.738990 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 3 months ago 839MB 2026-03-29 01:21:31.738997 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-29 01:21:31.739003 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 3 months ago 330MB 2026-03-29 01:21:31.739009 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 3 months ago 613MB 
2026-03-29 01:21:31.739015 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 3 months ago 560MB 2026-03-29 01:21:31.739022 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 3 months ago 1.23GB 2026-03-29 01:21:31.739028 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 3 months ago 383MB 2026-03-29 01:21:31.739034 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 3 months ago 238MB 2026-03-29 01:21:31.739040 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-03-29 01:21:31.739046 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-03-29 01:21:31.739052 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-03-29 01:21:31.739058 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-03-29 01:21:31.739065 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB 2026-03-29 01:21:31.739071 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB 2026-03-29 01:21:32.032220 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-29 01:21:32.033247 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-29 01:21:32.095084 | orchestrator | 2026-03-29 01:21:32.095144 | orchestrator | ## Containers @ testbed-node-0 2026-03-29 01:21:32.095153 | orchestrator | 2026-03-29 01:21:32.095160 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-29 01:21:32.095167 | orchestrator | + echo 2026-03-29 01:21:32.095173 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-03-29 01:21:32.095180 | orchestrator | + echo 2026-03-29 01:21:32.095184 | orchestrator | + osism container testbed-node-0 ps 2026-03-29 01:21:34.486248 | orchestrator | CONTAINER ID IMAGE COMMAND 
CREATED STATUS PORTS NAMES 2026-03-29 01:21:34.486353 | orchestrator | 1fafff51a4c2 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_worker 2026-03-29 01:21:34.486362 | orchestrator | 2bff51919b06 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_housekeeping 2026-03-29 01:21:34.486368 | orchestrator | 5fba81898a42 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_health_manager 2026-03-29 01:21:34.486374 | orchestrator | 0b5eef485040 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes octavia_driver_agent 2026-03-29 01:21:34.486381 | orchestrator | 54f90fb8bfc7 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_api 2026-03-29 01:21:34.486406 | orchestrator | b9a0250c3041 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-03-29 01:21:34.486412 | orchestrator | 02b4e16749c5 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-29 01:21:34.486419 | orchestrator | 5fd06c494d15 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-03-29 01:21:34.486425 | orchestrator | 52051bc2d8c0 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-03-29 01:21:34.486432 | orchestrator | a8de223af76f registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 11 minutes ago Up 11 minutes grafana 
2026-03-29 01:21:34.486437 | orchestrator | 78a7db5b2742 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-29 01:21:34.486441 | orchestrator | b6cc27227c53 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-03-29 01:21:34.486445 | orchestrator | 4a288f2eab85 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-03-29 01:21:34.486449 | orchestrator | 2bc3546e3eae registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2026-03-29 01:21:34.486453 | orchestrator | 89f0973cc19b registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-03-29 01:21:34.486457 | orchestrator | 0292d0aef1ef registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-03-29 01:21:34.486462 | orchestrator | e0311cfbc8e5 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-03-29 01:21:34.486467 | orchestrator | 5946fba43f97 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-03-29 01:21:34.486471 | orchestrator | a5baea215101 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-03-29 01:21:34.486491 | orchestrator | 5fa01400ea4d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 14 
minutes ago Up 14 minutes prometheus_node_exporter 2026-03-29 01:21:34.486496 | orchestrator | 36a5c464c6f3 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_conductor 2026-03-29 01:21:34.486500 | orchestrator | 54071604acd7 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_api 2026-03-29 01:21:34.486503 | orchestrator | 14d76fbb0ff5 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2026-03-29 01:21:34.486512 | orchestrator | fb8869bd9d2b registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2026-03-29 01:21:34.486516 | orchestrator | 53be0a63fae7 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2026-03-29 01:21:34.486520 | orchestrator | 90255a85afc1 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2026-03-29 01:21:34.486523 | orchestrator | 9f4b23416cbc registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2026-03-29 01:21:34.486530 | orchestrator | ce0f2c0c6103 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2026-03-29 01:21:34.486534 | orchestrator | 422c7365c60e registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 2026-03-29 01:21:34.486538 | orchestrator | 4ae9b25fa1a4 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init 
--single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-03-29 01:21:34.486564 | orchestrator | e3e732b51664 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-03-29 01:21:34.486569 | orchestrator | 7f36eac48d2e registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-03-29 01:21:34.486573 | orchestrator | 468fb6c17f1d registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2026-03-29 01:21:34.486576 | orchestrator | 0809444ceebd registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2026-03-29 01:21:34.486580 | orchestrator | afdcb88af503 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-29 01:21:34.486584 | orchestrator | f99dce217edb registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-03-29 01:21:34.486588 | orchestrator | 7809f046f7aa registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-03-29 01:21:34.486591 | orchestrator | 1017b47ed9b3 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-03-29 01:21:34.486595 | orchestrator | 8968d7a2773a registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-03-29 01:21:34.486599 | orchestrator | cc86854ee58b registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 23 minutes 
ago Up 23 minutes (healthy) opensearch_dashboards 2026-03-29 01:21:34.486608 | orchestrator | 4b511fca6cf0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-0 2026-03-29 01:21:34.486612 | orchestrator | a9024f93dc2c registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) opensearch 2026-03-29 01:21:34.486630 | orchestrator | 69db4678a87d registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-03-29 01:21:34.486634 | orchestrator | d34c276660a0 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-03-29 01:21:34.486638 | orchestrator | c36e71dc8685 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2026-03-29 01:21:34.486642 | orchestrator | d724bd2f51cd registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_northd 2026-03-29 01:21:34.486646 | orchestrator | a5e23a8bb79f registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_sb_db 2026-03-29 01:21:34.486649 | orchestrator | 7d9342b29ef4 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2026-03-29 01:21:34.486653 | orchestrator | a5ec47bd76e4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2026-03-29 01:21:34.486657 | orchestrator | d3717a88dfd8 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes ovn_controller 2026-03-29 01:21:34.486661 | orchestrator | d773e8e7d525 
registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2026-03-29 01:21:34.486664 | orchestrator | 1b633dbb03ad registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2026-03-29 01:21:34.486668 | orchestrator | a2ec03887084 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-03-29 01:21:34.486672 | orchestrator | 79127952dad6 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-03-29 01:21:34.486675 | orchestrator | 1305865cabcd registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-03-29 01:21:34.486679 | orchestrator | c54ba8e6443c registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-03-29 01:21:34.486685 | orchestrator | 189fcb982bfe registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-03-29 01:21:34.486689 | orchestrator | 436288e74572 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-03-29 01:21:34.486693 | orchestrator | 4c98255e7ace registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-03-29 01:21:34.791299 | orchestrator | 2026-03-29 01:21:34.791400 | orchestrator | ## Images @ testbed-node-0 2026-03-29 01:21:34.791410 | orchestrator | 2026-03-29 01:21:34.791417 | orchestrator | + echo 2026-03-29 01:21:34.791451 | orchestrator | + echo '## Images @ testbed-node-0' 2026-03-29 01:21:34.791460 | orchestrator | + echo 2026-03-29 
01:21:34.791467 | orchestrator | + osism container testbed-node-0 images 2026-03-29 01:21:37.169896 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-29 01:21:37.169980 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-29 01:21:37.169988 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-29 01:21:37.169994 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-29 01:21:37.169999 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-29 01:21:37.170003 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-29 01:21:37.170008 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-29 01:21:37.170056 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-29 01:21:37.170065 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-29 01:21:37.170074 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-29 01:21:37.170083 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-29 01:21:37.170091 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-29 01:21:37.170098 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-29 01:21:37.170106 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-29 01:21:37.170114 | orchestrator | 
registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-29 01:21:37.170122 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-29 01:21:37.170130 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-29 01:21:37.170139 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-29 01:21:37.170147 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-29 01:21:37.170174 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-29 01:21:37.170183 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-29 01:21:37.170188 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-29 01:21:37.170193 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-29 01:21:37.170198 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-29 01:21:37.170202 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-29 01:21:37.170211 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-29 01:21:37.170255 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-29 01:21:37.170267 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 
2026-03-29 01:21:37.170274 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-29 01:21:37.170281 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-29 01:21:37.170288 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-29 01:21:37.170295 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-29 01:21:37.170319 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-29 01:21:37.170327 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-29 01:21:37.170334 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-29 01:21:37.170339 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-29 01:21:37.170343 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-29 01:21:37.170350 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-29 01:21:37.170357 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-29 01:21:37.170364 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-29 01:21:37.170370 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-29 01:21:37.170377 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-29 
01:21:37.170384 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-29 01:21:37.170391 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-29 01:21:37.170398 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-29 01:21:37.170404 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-29 01:21:37.170412 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-29 01:21:37.170419 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-29 01:21:37.170428 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-29 01:21:37.170433 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-29 01:21:37.170437 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-29 01:21:37.170441 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-29 01:21:37.170453 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-29 01:21:37.170458 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-29 01:21:37.170463 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-29 01:21:37.170468 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 
2026-03-29 01:21:37.170473 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-29 01:21:37.170479 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-29 01:21:37.170484 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-29 01:21:37.170489 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-29 01:21:37.170494 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-29 01:21:37.170500 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-29 01:21:37.170505 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-29 01:21:37.170510 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-29 01:21:37.170520 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-29 01:21:37.170525 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-29 01:21:37.463826 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-29 01:21:37.464021 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-29 01:21:37.512697 | orchestrator | 2026-03-29 01:21:37.512794 | orchestrator | ## Containers @ testbed-node-1 2026-03-29 01:21:37.512803 | orchestrator | 2026-03-29 01:21:37.512807 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-29 01:21:37.512812 | orchestrator | + echo 2026-03-29 01:21:37.512817 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-29 01:21:37.512822 | orchestrator | + echo 2026-03-29 
01:21:37.512828 | orchestrator | + osism container testbed-node-1 ps 2026-03-29 01:21:39.844897 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-29 01:21:39.844981 | orchestrator | 06d80a46ddc0 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_worker 2026-03-29 01:21:39.844992 | orchestrator | 34ae54bbb39c registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_housekeeping 2026-03-29 01:21:39.844999 | orchestrator | 90ae0506cdc8 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_health_manager 2026-03-29 01:21:39.845005 | orchestrator | 5ed3f86ddf96 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes octavia_driver_agent 2026-03-29 01:21:39.845034 | orchestrator | 2e82a837074e registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_api 2026-03-29 01:21:39.845058 | orchestrator | bab8e8414a7e registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-03-29 01:21:39.845063 | orchestrator | 97b7521c645e registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-29 01:21:39.845066 | orchestrator | 21ab1f616f71 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-03-29 01:21:39.845071 | orchestrator | 8054593fae4c registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2026-03-29 01:21:39.845075 | orchestrator | 7ef175a1b653 
registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-03-29 01:21:39.845078 | orchestrator | 40e81138fc44 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-29 01:21:39.845085 | orchestrator | 1da33e28fbe5 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-03-29 01:21:39.845089 | orchestrator | 919e32fd82cf registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-03-29 01:21:39.845093 | orchestrator | 10a12ac0fc15 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2026-03-29 01:21:39.845097 | orchestrator | ffd66a631945 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-03-29 01:21:39.845100 | orchestrator | 1c38e8f41f1c registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-03-29 01:21:39.845106 | orchestrator | e8c9c8b181c5 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-03-29 01:21:39.845110 | orchestrator | f93d62d2f0e2 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-03-29 01:21:39.845114 | orchestrator | e365870b4e3a registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-03-29 
01:21:39.845128 | orchestrator | 95d9595a6364 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-03-29 01:21:39.845132 | orchestrator | 8eda22348a84 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_conductor 2026-03-29 01:21:39.845136 | orchestrator | cc46c6db13eb registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_api 2026-03-29 01:21:39.845140 | orchestrator | d9ede876ebcb registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2026-03-29 01:21:39.845148 | orchestrator | 1a5902025014 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 2026-03-29 01:21:39.845151 | orchestrator | a8b676bb5024 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2026-03-29 01:21:39.845155 | orchestrator | 57e3e9915345 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2026-03-29 01:21:39.845162 | orchestrator | a0660eeeabb7 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2026-03-29 01:21:39.845166 | orchestrator | f50b03f0fbd2 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_api 2026-03-29 01:21:39.845169 | orchestrator | 5909e693031a registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) placement_api 
2026-03-29 01:21:39.845173 | orchestrator | b7b07540d1bf registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-03-29 01:21:39.845177 | orchestrator | 286e5791f502 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-03-29 01:21:39.845181 | orchestrator | 4aa4694dd915 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-03-29 01:21:39.845184 | orchestrator | b2ab7fb8d103 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_api 2026-03-29 01:21:39.845188 | orchestrator | f8dc183f05cc registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-1 2026-03-29 01:21:39.845192 | orchestrator | c4ca426bb6f1 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-29 01:21:39.845195 | orchestrator | 735499ca6aed registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-03-29 01:21:39.845199 | orchestrator | abd8fce5cfd8 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-03-29 01:21:39.845203 | orchestrator | 6df610d214b5 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-03-29 01:21:39.845206 | orchestrator | 39bd8b91a2f1 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 
2026-03-29 01:21:39.845210 | orchestrator | f9ad07e07234 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 23 minutes ago Up 23 minutes (healthy) mariadb 2026-03-29 01:21:39.845218 | orchestrator | 75051667b1cb registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-03-29 01:21:39.845225 | orchestrator | bcc43c3b116d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2026-03-29 01:21:39.845229 | orchestrator | 061260ca29d4 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2026-03-29 01:21:39.845233 | orchestrator | 4b2cdaaa9ea2 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-03-29 01:21:39.845236 | orchestrator | e6d9a83ab557 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2026-03-29 01:21:39.845240 | orchestrator | c31510f2eb1e registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2026-03-29 01:21:39.845244 | orchestrator | f524f96eb567 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2026-03-29 01:21:39.845247 | orchestrator | 37efac5d55fd registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2026-03-29 01:21:39.845253 | orchestrator | f4e4e2d790dc registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2026-03-29 01:21:39.845259 | orchestrator | a6cc0e6592ba registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 28 
minutes ago Up 28 minutes ovn_controller 2026-03-29 01:21:39.845264 | orchestrator | e5a7e8fec5b2 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2026-03-29 01:21:39.845275 | orchestrator | 7fd34b18214f registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2026-03-29 01:21:39.845284 | orchestrator | 23f4ae309078 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-03-29 01:21:39.845295 | orchestrator | e603770cf8d0 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-03-29 01:21:39.845300 | orchestrator | 0adc3634111d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2026-03-29 01:21:39.845306 | orchestrator | 9d14822471e4 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-03-29 01:21:39.845312 | orchestrator | 07f89db50084 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-03-29 01:21:39.845317 | orchestrator | 78535e09a101 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-03-29 01:21:39.845324 | orchestrator | 816343dd66ce registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 32 minutes ago Up 31 minutes fluentd 2026-03-29 01:21:40.158271 | orchestrator | 2026-03-29 01:21:40.158341 | orchestrator | ## Images @ testbed-node-1 2026-03-29 01:21:40.158365 | orchestrator | 2026-03-29 01:21:40.158370 | orchestrator | + echo 2026-03-29 01:21:40.158374 | orchestrator | + echo 
'## Images @ testbed-node-1' 2026-03-29 01:21:40.158379 | orchestrator | + echo 2026-03-29 01:21:40.158383 | orchestrator | + osism container testbed-node-1 images 2026-03-29 01:21:42.616863 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-29 01:21:42.616925 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-29 01:21:42.616935 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-29 01:21:42.616940 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-29 01:21:42.616944 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-29 01:21:42.616948 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-29 01:21:42.616952 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-29 01:21:42.616956 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-29 01:21:42.616959 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-29 01:21:42.616963 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-29 01:21:42.616967 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-29 01:21:42.616971 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-29 01:21:42.616974 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-29 01:21:42.616978 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 
months ago 273MB 2026-03-29 01:21:42.616982 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-29 01:21:42.616986 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-29 01:21:42.616990 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-29 01:21:42.616993 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-29 01:21:42.616997 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-29 01:21:42.617001 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-29 01:21:42.617005 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-29 01:21:42.617008 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-29 01:21:42.617012 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-29 01:21:42.617016 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-29 01:21:42.617019 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-29 01:21:42.617035 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-29 01:21:42.617039 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-29 01:21:42.617154 | orchestrator | 
registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-29 01:21:42.617196 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-29 01:21:42.617203 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-29 01:21:42.617218 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-29 01:21:42.617224 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-29 01:21:42.617228 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-29 01:21:42.617233 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-29 01:21:42.617238 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-29 01:21:42.617243 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-29 01:21:42.617248 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-29 01:21:42.617252 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-29 01:21:42.617257 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-29 01:21:42.617262 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-29 01:21:42.617266 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-29 01:21:42.617271 | 
orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-29 01:21:42.617276 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-29 01:21:42.617281 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-29 01:21:42.617288 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-29 01:21:42.617293 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-29 01:21:42.617297 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-29 01:21:42.617302 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-29 01:21:42.617307 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-29 01:21:42.617312 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-29 01:21:42.617316 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-29 01:21:42.617331 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-29 01:21:42.617336 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-29 01:21:42.617341 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-29 01:21:42.617345 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-29 01:21:42.617350 | 
orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-29 01:21:42.617355 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-29 01:21:42.617360 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-29 01:21:42.934180 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-29 01:21:42.934359 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-29 01:21:42.982622 | orchestrator | 2026-03-29 01:21:42.982690 | orchestrator | ## Containers @ testbed-node-2 2026-03-29 01:21:42.982700 | orchestrator | 2026-03-29 01:21:42.982708 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-29 01:21:42.982716 | orchestrator | + echo 2026-03-29 01:21:42.982723 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-29 01:21:42.982731 | orchestrator | + echo 2026-03-29 01:21:42.982786 | orchestrator | + osism container testbed-node-2 ps 2026-03-29 01:21:45.412977 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-29 01:21:45.413064 | orchestrator | 125c9ca43e62 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_worker 2026-03-29 01:21:45.413072 | orchestrator | 8ddb19de3569 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_housekeeping 2026-03-29 01:21:45.413077 | orchestrator | 9cd7a302cef6 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_health_manager 2026-03-29 01:21:45.413080 | orchestrator | 1fce3453a4d5 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes octavia_driver_agent 2026-03-29 01:21:45.413084 | 
orchestrator | c1e42681ad3e registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes (healthy) octavia_api 2026-03-29 01:21:45.413088 | orchestrator | ea4613e77d00 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_novncproxy 2026-03-29 01:21:45.413092 | orchestrator | 7beedca3838f registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2026-03-29 01:21:45.413096 | orchestrator | b1c479415507 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana 2026-03-29 01:21:45.413100 | orchestrator | 4c4b6cc91d5a registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-03-29 01:21:45.413104 | orchestrator | f1e4a6b13fd4 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_scheduler 2026-03-29 01:21:45.413161 | orchestrator | 8304dd8fc157 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-03-29 01:21:45.413178 | orchestrator | 7f758c1aef2f registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-03-29 01:21:45.413182 | orchestrator | ca9b41796755 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) glance_api 2026-03-29 01:21:45.413186 | orchestrator | bda05ea42ea5 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_scheduler 2026-03-29 01:21:45.413190 | orchestrator | 551e44dac724 
registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-03-29 01:21:45.413194 | orchestrator | 07577237ed13 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-03-29 01:21:45.413200 | orchestrator | 6ebeb45df144 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2026-03-29 01:21:45.413204 | orchestrator | 34772831fc30 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2026-03-29 01:21:45.413208 | orchestrator | 6ed679ea9db6 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2026-03-29 01:21:45.413223 | orchestrator | 0bed377b5962 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-03-29 01:21:45.413227 | orchestrator | e3a5e33fcf49 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_conductor 2026-03-29 01:21:45.413230 | orchestrator | 71dc8a7defb5 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) magnum_api 2026-03-29 01:21:45.413234 | orchestrator | 74589cc635b9 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) neutron_server 2026-03-29 01:21:45.413238 | orchestrator | f7c0b66003fb registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_worker 
2026-03-29 01:21:45.413242 | orchestrator | 64718cbfaed4 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_mdns 2026-03-29 01:21:45.413245 | orchestrator | 5b3d74501336 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_producer 2026-03-29 01:21:45.413249 | orchestrator | 0fa55d40416b registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_central 2026-03-29 01:21:45.413253 | orchestrator | a419ec0d3531 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_api 2026-03-29 01:21:45.413257 | orchestrator | 5f386be1587b registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) placement_api 2026-03-29 01:21:45.413263 | orchestrator | d62e65986af0 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) designate_backend_bind9 2026-03-29 01:21:45.413267 | orchestrator | 2e3a22370beb registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_worker 2026-03-29 01:21:45.413271 | orchestrator | 3d7c0b52b689 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) barbican_keystone_listener 2026-03-29 01:21:45.413274 | orchestrator | 47323937a729 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2026-03-29 01:21:45.413278 | orchestrator | 75172a123de9 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes 
(healthy) barbican_api 2026-03-29 01:21:45.413282 | orchestrator | 0673009711c1 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone 2026-03-29 01:21:45.413285 | orchestrator | c41b6e55e42e registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon 2026-03-29 01:21:45.413289 | orchestrator | 7dba8c60bc1d registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_fernet 2026-03-29 01:21:45.413292 | orchestrator | 9f3b51583be3 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) keystone_ssh 2026-03-29 01:21:45.413296 | orchestrator | 3380fc43d72e registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2026-03-29 01:21:45.413300 | orchestrator | 8e0ada72323d registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2026-03-29 01:21:45.413307 | orchestrator | c7c1767e6631 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2026-03-29 01:21:45.413311 | orchestrator | dfa993ab53dd registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2026-03-29 01:21:45.413315 | orchestrator | a1234fdb931b registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes keepalived 2026-03-29 01:21:45.413319 | orchestrator | 15b81ab9cb3c registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) proxysql 2026-03-29 01:21:45.413322 | orchestrator | 6e8541301744 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) haproxy 2026-03-29 01:21:45.413326 | orchestrator | 72148ccd9c71 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_northd 2026-03-29 01:21:45.413334 | orchestrator | b0f89f60e4f0 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2026-03-29 01:21:45.413342 | orchestrator | 66cabf7baf1b registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_nb_db 2026-03-29 01:21:45.413346 | orchestrator | bf31a44a5fa6 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2026-03-29 01:21:45.413350 | orchestrator | bfdee89cf998 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2026-03-29 01:21:45.413354 | orchestrator | 858527e03b20 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2026-03-29 01:21:45.413360 | orchestrator | ea6c3a4224af registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_vswitchd 2026-03-29 01:21:45.413364 | orchestrator | 61aed3127bf0 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2026-03-29 01:21:45.413368 | orchestrator | 36e493fa61aa registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2026-03-29 01:21:45.413372 | orchestrator | c2d78815f93a registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" 30 minutes ago 
Up 30 minutes (healthy) redis 2026-03-29 01:21:45.413375 | orchestrator | 4d7813aa495d registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2026-03-29 01:21:45.413379 | orchestrator | 7feb221ee535 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2026-03-29 01:21:45.413383 | orchestrator | d291741ad50c registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2026-03-29 01:21:45.413387 | orchestrator | 023afd9d2cc5 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2026-03-29 01:21:45.704912 | orchestrator | 2026-03-29 01:21:45.705006 | orchestrator | ## Images @ testbed-node-2 2026-03-29 01:21:45.705017 | orchestrator | 2026-03-29 01:21:45.705024 | orchestrator | + echo 2026-03-29 01:21:45.705032 | orchestrator | + echo '## Images @ testbed-node-2' 2026-03-29 01:21:45.705040 | orchestrator | + echo 2026-03-29 01:21:45.705047 | orchestrator | + osism container testbed-node-2 images 2026-03-29 01:21:48.032985 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-29 01:21:48.033088 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-29 01:21:48.033100 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-29 01:21:48.033107 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-29 01:21:48.033114 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-29 01:21:48.033120 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-29 01:21:48.033127 | orchestrator | 
registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-29 01:21:48.033154 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-29 01:21:48.033161 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-29 01:21:48.033168 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-29 01:21:48.033175 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-29 01:21:48.033182 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-29 01:21:48.033188 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-29 01:21:48.033194 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-29 01:21:48.033200 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-29 01:21:48.033206 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-29 01:21:48.033213 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-29 01:21:48.033219 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-29 01:21:48.033225 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-29 01:21:48.033231 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-29 01:21:48.033237 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-29 01:21:48.033244 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-29 01:21:48.033251 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-29 01:21:48.033257 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-29 01:21:48.033263 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-29 01:21:48.033270 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-29 01:21:48.033276 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-29 01:21:48.033282 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-29 01:21:48.033289 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-29 01:21:48.033295 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-29 01:21:48.033318 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-29 01:21:48.033325 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-29 01:21:48.033348 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-29 01:21:48.033361 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-29 01:21:48.033367 | orchestrator | 
registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-29 01:21:48.033374 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-29 01:21:48.033380 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-29 01:21:48.033387 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-29 01:21:48.033393 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-29 01:21:48.033399 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-29 01:21:48.033405 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-29 01:21:48.033426 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-29 01:21:48.033432 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-29 01:21:48.033439 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-29 01:21:48.033444 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-29 01:21:48.033448 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-29 01:21:48.033452 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-29 01:21:48.033456 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-29 01:21:48.033460 | 
orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-29 01:21:48.033463 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-29 01:21:48.033467 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-29 01:21:48.033471 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-29 01:21:48.033478 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-29 01:21:48.033481 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-29 01:21:48.033485 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-29 01:21:48.033489 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-29 01:21:48.033492 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-29 01:21:48.033496 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-29 01:21:48.348059 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-29 01:21:48.356373 | orchestrator | + set -e 2026-03-29 01:21:48.356477 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 01:21:48.357438 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 01:21:48.357496 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 01:21:48.357507 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 01:21:48.357514 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 01:21:48.357524 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 01:21:48.357531 | orchestrator | ++ 
CONFIGURATION_VERSION=main 2026-03-29 01:21:48.357538 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 01:21:48.357544 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 01:21:48.357550 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-29 01:21:48.357556 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 01:21:48.357562 | orchestrator | ++ export ARA=false 2026-03-29 01:21:48.357568 | orchestrator | ++ ARA=false 2026-03-29 01:21:48.357646 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 01:21:48.357657 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 01:21:48.357663 | orchestrator | ++ export TEMPEST=true 2026-03-29 01:21:48.357669 | orchestrator | ++ TEMPEST=true 2026-03-29 01:21:48.357674 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 01:21:48.357680 | orchestrator | ++ IS_ZUUL=true 2026-03-29 01:21:48.357686 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-03-29 01:21:48.357693 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-03-29 01:21:48.357700 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 01:21:48.357706 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 01:21:48.357712 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 01:21:48.357719 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 01:21:48.357725 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 01:21:48.357732 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 01:21:48.357739 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 01:21:48.357745 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 01:21:48.357751 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-29 01:21:48.357758 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-29 01:21:48.367629 | orchestrator | + set -e 2026-03-29 01:21:48.367691 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 01:21:48.368514 | orchestrator | ++ export 
INTERACTIVE=false 2026-03-29 01:21:48.368539 | orchestrator | ++ INTERACTIVE=false 2026-03-29 01:21:48.368546 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 01:21:48.368553 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 01:21:48.368560 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-29 01:21:48.368568 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-29 01:21:48.371694 | orchestrator | 2026-03-29 01:21:48.371731 | orchestrator | # Ceph status 2026-03-29 01:21:48.371737 | orchestrator | 2026-03-29 01:21:48.371742 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 01:21:48.371747 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 01:21:48.371752 | orchestrator | + echo 2026-03-29 01:21:48.371756 | orchestrator | + echo '# Ceph status' 2026-03-29 01:21:48.371760 | orchestrator | + echo 2026-03-29 01:21:48.371763 | orchestrator | + ceph -s 2026-03-29 01:21:48.953436 | orchestrator | cluster: 2026-03-29 01:21:48.953490 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-29 01:21:48.953499 | orchestrator | health: HEALTH_OK 2026-03-29 01:21:48.953505 | orchestrator | 2026-03-29 01:21:48.953512 | orchestrator | services: 2026-03-29 01:21:48.953518 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2026-03-29 01:21:48.953532 | orchestrator | mgr: testbed-node-2(active, since 17m), standbys: testbed-node-1, testbed-node-0 2026-03-29 01:21:48.953540 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-29 01:21:48.953546 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 25m) 2026-03-29 01:21:48.953553 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-29 01:21:48.953560 | orchestrator | 2026-03-29 01:21:48.953566 | orchestrator | data: 2026-03-29 01:21:48.953573 | orchestrator | volumes: 1/1 healthy 2026-03-29 01:21:48.953589 | orchestrator | pools: 14 
pools, 401 pgs 2026-03-29 01:21:48.953593 | orchestrator | objects: 555 objects, 2.2 GiB 2026-03-29 01:21:48.953597 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-03-29 01:21:48.953601 | orchestrator | pgs: 401 active+clean 2026-03-29 01:21:48.953605 | orchestrator | 2026-03-29 01:21:49.013051 | orchestrator | 2026-03-29 01:21:49.013106 | orchestrator | # Ceph versions 2026-03-29 01:21:49.013115 | orchestrator | 2026-03-29 01:21:49.013122 | orchestrator | + echo 2026-03-29 01:21:49.013128 | orchestrator | + echo '# Ceph versions' 2026-03-29 01:21:49.013135 | orchestrator | + echo 2026-03-29 01:21:49.013157 | orchestrator | + ceph versions 2026-03-29 01:21:49.578819 | orchestrator | { 2026-03-29 01:21:49.578868 | orchestrator | "mon": { 2026-03-29 01:21:49.578874 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-29 01:21:49.578878 | orchestrator | }, 2026-03-29 01:21:49.578882 | orchestrator | "mgr": { 2026-03-29 01:21:49.578886 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-29 01:21:49.578890 | orchestrator | }, 2026-03-29 01:21:49.578894 | orchestrator | "osd": { 2026-03-29 01:21:49.578898 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-03-29 01:21:49.578902 | orchestrator | }, 2026-03-29 01:21:49.578905 | orchestrator | "mds": { 2026-03-29 01:21:49.578909 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-29 01:21:49.578913 | orchestrator | }, 2026-03-29 01:21:49.578917 | orchestrator | "rgw": { 2026-03-29 01:21:49.578921 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-29 01:21:49.578924 | orchestrator | }, 2026-03-29 01:21:49.578928 | orchestrator | "overall": { 2026-03-29 01:21:49.578932 | orchestrator | "ceph version 18.2.7 
(6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-03-29 01:21:49.578936 | orchestrator | } 2026-03-29 01:21:49.578940 | orchestrator | } 2026-03-29 01:21:49.621795 | orchestrator | 2026-03-29 01:21:49.621844 | orchestrator | # Ceph OSD tree 2026-03-29 01:21:49.621849 | orchestrator | 2026-03-29 01:21:49.621854 | orchestrator | + echo 2026-03-29 01:21:49.621858 | orchestrator | + echo '# Ceph OSD tree' 2026-03-29 01:21:49.621862 | orchestrator | + echo 2026-03-29 01:21:49.621919 | orchestrator | + ceph osd df tree 2026-03-29 01:21:50.111651 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-29 01:21:50.111703 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2026-03-29 01:21:50.111711 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2026-03-29 01:21:50.111717 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.96 1.01 190 up osd.0 2026-03-29 01:21:50.111722 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.87 0.99 202 up osd.4 2026-03-29 01:21:50.111727 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2026-03-29 01:21:50.111732 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.28 1.23 188 up osd.2 2026-03-29 01:21:50.111738 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 932 MiB 859 MiB 1 KiB 74 MiB 19 GiB 4.56 0.77 200 up osd.3 2026-03-29 01:21:50.111744 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-03-29 01:21:50.111750 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.61 1.12 209 up osd.1 2026-03-29 01:21:50.111755 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 995 MiB 1 KiB 74 MiB 19 GiB 5.22 0.88 181 up osd.5 2026-03-29 
01:21:50.111761 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2026-03-29 01:21:50.111770 | orchestrator | MIN/MAX VAR: 0.77/1.23 STDDEV: 0.88 2026-03-29 01:21:50.152449 | orchestrator | 2026-03-29 01:21:50.152492 | orchestrator | # Ceph monitor status 2026-03-29 01:21:50.152497 | orchestrator | 2026-03-29 01:21:50.152500 | orchestrator | + echo 2026-03-29 01:21:50.152504 | orchestrator | + echo '# Ceph monitor status' 2026-03-29 01:21:50.152507 | orchestrator | + echo 2026-03-29 01:21:50.152511 | orchestrator | + ceph mon stat 2026-03-29 01:21:50.731914 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-29 01:21:50.788811 | orchestrator | 2026-03-29 01:21:50.788878 | orchestrator | # Ceph quorum status 2026-03-29 01:21:50.788883 | orchestrator | 2026-03-29 01:21:50.788887 | orchestrator | + echo 2026-03-29 01:21:50.788891 | orchestrator | + echo '# Ceph quorum status' 2026-03-29 01:21:50.788894 | orchestrator | + echo 2026-03-29 01:21:50.789884 | orchestrator | + ceph quorum_status 2026-03-29 01:21:50.789919 | orchestrator | + jq 2026-03-29 01:21:51.440124 | orchestrator | { 2026-03-29 01:21:51.440175 | orchestrator | "election_epoch": 8, 2026-03-29 01:21:51.440181 | orchestrator | "quorum": [ 2026-03-29 01:21:51.440186 | orchestrator | 0, 2026-03-29 01:21:51.440190 | orchestrator | 1, 2026-03-29 01:21:51.440193 | orchestrator | 2 2026-03-29 01:21:51.440197 | orchestrator | ], 2026-03-29 01:21:51.440201 | orchestrator | "quorum_names": [ 2026-03-29 01:21:51.440205 | orchestrator | "testbed-node-0", 2026-03-29 01:21:51.440208 | orchestrator | "testbed-node-1", 2026-03-29 01:21:51.440212 | orchestrator | 
"testbed-node-2" 2026-03-29 01:21:51.440216 | orchestrator | ], 2026-03-29 01:21:51.440220 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-29 01:21:51.440224 | orchestrator | "quorum_age": 1732, 2026-03-29 01:21:51.440228 | orchestrator | "features": { 2026-03-29 01:21:51.440232 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-29 01:21:51.440235 | orchestrator | "quorum_mon": [ 2026-03-29 01:21:51.440239 | orchestrator | "kraken", 2026-03-29 01:21:51.440243 | orchestrator | "luminous", 2026-03-29 01:21:51.440247 | orchestrator | "mimic", 2026-03-29 01:21:51.440250 | orchestrator | "osdmap-prune", 2026-03-29 01:21:51.440254 | orchestrator | "nautilus", 2026-03-29 01:21:51.440257 | orchestrator | "octopus", 2026-03-29 01:21:51.440261 | orchestrator | "pacific", 2026-03-29 01:21:51.440265 | orchestrator | "elector-pinging", 2026-03-29 01:21:51.440268 | orchestrator | "quincy", 2026-03-29 01:21:51.440272 | orchestrator | "reef" 2026-03-29 01:21:51.440276 | orchestrator | ] 2026-03-29 01:21:51.440279 | orchestrator | }, 2026-03-29 01:21:51.440283 | orchestrator | "monmap": { 2026-03-29 01:21:51.440287 | orchestrator | "epoch": 1, 2026-03-29 01:21:51.440291 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-29 01:21:51.440295 | orchestrator | "modified": "2026-03-29T00:52:40.124132Z", 2026-03-29 01:21:51.440298 | orchestrator | "created": "2026-03-29T00:52:40.124132Z", 2026-03-29 01:21:51.440302 | orchestrator | "min_mon_release": 18, 2026-03-29 01:21:51.440306 | orchestrator | "min_mon_release_name": "reef", 2026-03-29 01:21:51.440309 | orchestrator | "election_strategy": 1, 2026-03-29 01:21:51.440313 | orchestrator | "disallowed_leaders: ": "", 2026-03-29 01:21:51.440317 | orchestrator | "stretch_mode": false, 2026-03-29 01:21:51.440321 | orchestrator | "tiebreaker_mon": "", 2026-03-29 01:21:51.440324 | orchestrator | "removed_ranks: ": "", 2026-03-29 01:21:51.440328 | orchestrator | "features": { 2026-03-29 
01:21:51.440331 | orchestrator | "persistent": [ 2026-03-29 01:21:51.440335 | orchestrator | "kraken", 2026-03-29 01:21:51.440339 | orchestrator | "luminous", 2026-03-29 01:21:51.440342 | orchestrator | "mimic", 2026-03-29 01:21:51.440346 | orchestrator | "osdmap-prune", 2026-03-29 01:21:51.440350 | orchestrator | "nautilus", 2026-03-29 01:21:51.440354 | orchestrator | "octopus", 2026-03-29 01:21:51.440358 | orchestrator | "pacific", 2026-03-29 01:21:51.440361 | orchestrator | "elector-pinging", 2026-03-29 01:21:51.440365 | orchestrator | "quincy", 2026-03-29 01:21:51.440369 | orchestrator | "reef" 2026-03-29 01:21:51.440372 | orchestrator | ], 2026-03-29 01:21:51.440376 | orchestrator | "optional": [] 2026-03-29 01:21:51.440380 | orchestrator | }, 2026-03-29 01:21:51.440383 | orchestrator | "mons": [ 2026-03-29 01:21:51.440387 | orchestrator | { 2026-03-29 01:21:51.440391 | orchestrator | "rank": 0, 2026-03-29 01:21:51.440394 | orchestrator | "name": "testbed-node-0", 2026-03-29 01:21:51.440398 | orchestrator | "public_addrs": { 2026-03-29 01:21:51.440402 | orchestrator | "addrvec": [ 2026-03-29 01:21:51.440406 | orchestrator | { 2026-03-29 01:21:51.440409 | orchestrator | "type": "v2", 2026-03-29 01:21:51.440413 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-29 01:21:51.440417 | orchestrator | "nonce": 0 2026-03-29 01:21:51.440420 | orchestrator | }, 2026-03-29 01:21:51.440424 | orchestrator | { 2026-03-29 01:21:51.440435 | orchestrator | "type": "v1", 2026-03-29 01:21:51.440439 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-29 01:21:51.440443 | orchestrator | "nonce": 0 2026-03-29 01:21:51.440446 | orchestrator | } 2026-03-29 01:21:51.440455 | orchestrator | ] 2026-03-29 01:21:51.440470 | orchestrator | }, 2026-03-29 01:21:51.440474 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-03-29 01:21:51.440478 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-03-29 01:21:51.440482 | orchestrator | "priority": 0, 2026-03-29 01:21:51.440485 
| orchestrator | "weight": 0, 2026-03-29 01:21:51.440497 | orchestrator | "crush_location": "{}" 2026-03-29 01:21:51.440501 | orchestrator | }, 2026-03-29 01:21:51.440505 | orchestrator | { 2026-03-29 01:21:51.440509 | orchestrator | "rank": 1, 2026-03-29 01:21:51.440512 | orchestrator | "name": "testbed-node-1", 2026-03-29 01:21:51.440516 | orchestrator | "public_addrs": { 2026-03-29 01:21:51.440520 | orchestrator | "addrvec": [ 2026-03-29 01:21:51.440523 | orchestrator | { 2026-03-29 01:21:51.440527 | orchestrator | "type": "v2", 2026-03-29 01:21:51.440531 | orchestrator | "addr": "192.168.16.11:3300", 2026-03-29 01:21:51.440535 | orchestrator | "nonce": 0 2026-03-29 01:21:51.440538 | orchestrator | }, 2026-03-29 01:21:51.440542 | orchestrator | { 2026-03-29 01:21:51.440546 | orchestrator | "type": "v1", 2026-03-29 01:21:51.440549 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-29 01:21:51.440553 | orchestrator | "nonce": 0 2026-03-29 01:21:51.440557 | orchestrator | } 2026-03-29 01:21:51.440560 | orchestrator | ] 2026-03-29 01:21:51.440564 | orchestrator | }, 2026-03-29 01:21:51.440568 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-29 01:21:51.440571 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-29 01:21:51.440575 | orchestrator | "priority": 0, 2026-03-29 01:21:51.440579 | orchestrator | "weight": 0, 2026-03-29 01:21:51.440629 | orchestrator | "crush_location": "{}" 2026-03-29 01:21:51.440635 | orchestrator | }, 2026-03-29 01:21:51.440639 | orchestrator | { 2026-03-29 01:21:51.440643 | orchestrator | "rank": 2, 2026-03-29 01:21:51.440646 | orchestrator | "name": "testbed-node-2", 2026-03-29 01:21:51.440650 | orchestrator | "public_addrs": { 2026-03-29 01:21:51.440654 | orchestrator | "addrvec": [ 2026-03-29 01:21:51.440657 | orchestrator | { 2026-03-29 01:21:51.440661 | orchestrator | "type": "v2", 2026-03-29 01:21:51.440665 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-29 01:21:51.440668 | orchestrator | "nonce": 0 
2026-03-29 01:21:51.440672 | orchestrator | },
2026-03-29 01:21:51.440677 | orchestrator | {
2026-03-29 01:21:51.440681 | orchestrator | "type": "v1",
2026-03-29 01:21:51.440688 | orchestrator | "addr": "192.168.16.12:6789",
2026-03-29 01:21:51.440692 | orchestrator | "nonce": 0
2026-03-29 01:21:51.440696 | orchestrator | }
2026-03-29 01:21:51.440701 | orchestrator | ]
2026-03-29 01:21:51.440705 | orchestrator | },
2026-03-29 01:21:51.440709 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-03-29 01:21:51.440713 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-03-29 01:21:51.440718 | orchestrator | "priority": 0,
2026-03-29 01:21:51.440722 | orchestrator | "weight": 0,
2026-03-29 01:21:51.440728 | orchestrator | "crush_location": "{}"
2026-03-29 01:21:51.440735 | orchestrator | }
2026-03-29 01:21:51.440741 | orchestrator | ]
2026-03-29 01:21:51.440747 | orchestrator | }
2026-03-29 01:21:51.440754 | orchestrator | }
2026-03-29 01:21:51.440836 | orchestrator |
2026-03-29 01:21:51.440845 | orchestrator | # Ceph free space status
2026-03-29 01:21:51.440850 | orchestrator |
2026-03-29 01:21:51.440854 | orchestrator | + echo
2026-03-29 01:21:51.440858 | orchestrator | + echo '# Ceph free space status'
2026-03-29 01:21:51.440863 | orchestrator | + echo
2026-03-29 01:21:51.440867 | orchestrator | + ceph df
2026-03-29 01:21:52.045331 | orchestrator | --- RAW STORAGE ---
2026-03-29 01:21:52.045389 | orchestrator | CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
2026-03-29 01:21:52.045404 | orchestrator | hdd    120 GiB  113 GiB  7.1 GiB  7.1 GiB   5.92
2026-03-29 01:21:52.045410 | orchestrator | TOTAL  120 GiB  113 GiB  7.1 GiB  7.1 GiB   5.92
2026-03-29 01:21:52.045416 | orchestrator |
2026-03-29 01:21:52.045422 | orchestrator | --- POOLS ---
2026-03-29 01:21:52.045427 | orchestrator | POOL                       ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
2026-03-29 01:21:52.045431 | orchestrator | .mgr                        1    1  577 KiB        2  1.1 MiB      0     53 GiB
2026-03-29 01:21:52.045434 | orchestrator | cephfs_data                 2   32      0 B        0      0 B      0     35 GiB
2026-03-29 01:21:52.045437 | orchestrator | cephfs_metadata             3   16  4.4 KiB       22   96 KiB      0     35 GiB
2026-03-29 01:21:52.045440 | orchestrator | default.rgw.buckets.data    4   32      0 B        0      0 B      0     35 GiB
2026-03-29 01:21:52.045476 | orchestrator | default.rgw.buckets.index   5   32      0 B        0      0 B      0     35 GiB
2026-03-29 01:21:52.045480 | orchestrator | default.rgw.control         6   32      0 B        8      0 B      0     35 GiB
2026-03-29 01:21:52.045483 | orchestrator | default.rgw.log             7   32  3.6 KiB      209  408 KiB      0     35 GiB
2026-03-29 01:21:52.045486 | orchestrator | default.rgw.meta            8   32      0 B        0      0 B      0     35 GiB
2026-03-29 01:21:52.045489 | orchestrator | .rgw.root                   9   32  3.5 KiB        7   56 KiB      0     53 GiB
2026-03-29 01:21:52.045492 | orchestrator | backups                    10   32     19 B        2   12 KiB      0     35 GiB
2026-03-29 01:21:52.045495 | orchestrator | volumes                    11   32     19 B        2   12 KiB      0     35 GiB
2026-03-29 01:21:52.045498 | orchestrator | images                     12   32  2.2 GiB      299  6.7 GiB   5.95     35 GiB
2026-03-29 01:21:52.045501 | orchestrator | metrics                    13   32     19 B        2   12 KiB      0     35 GiB
2026-03-29 01:21:52.045504 | orchestrator | vms                        14   32     19 B        2   12 KiB      0     35 GiB
2026-03-29 01:21:52.087577 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-29 01:21:52.141659 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-29 01:21:52.141712 | orchestrator | + osism apply facts
2026-03-29 01:21:54.166000 | orchestrator | 2026-03-29 01:21:54 | INFO  | Task fdbc64a1-0790-451b-80d0-1005254b902d (facts) was prepared for execution.
2026-03-29 01:21:54.166140 | orchestrator | 2026-03-29 01:21:54 | INFO  | It takes a moment until task fdbc64a1-0790-451b-80d0-1005254b902d (facts) has been started and output is visible here.
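The `++ semver 9.5.0 5.0.0` followed by `+ [[ 1 -eq -1 ]]` trace is a version gate: a helper compares two versions three-way (printing -1, 0, or 1), and the guarded legacy branch only runs when the first version is lower. Since 9.5.0 > 5.0.0 the comparison prints 1 and the branch is skipped. A minimal Python sketch of such a three-way comparison; `semver_cmp` and its handling of pre-release/build suffixes are illustrative assumptions, not the testbed's actual `semver` helper:

```python
# Sketch (assumption): a three-way dotted-version comparison like the
# one the `semver 9.5.0 5.0.0` shell trace above appears to perform.

def semver_cmp(a: str, b: str) -> int:
    """Return -1, 0, or 1 as version a is lower than, equal to, or
    higher than version b. Compares only the numeric core; pre-release
    ("-rc1") and build ("+abc") suffixes are deliberately ignored here."""
    def parts(v: str):
        core = v.split("-")[0].split("+")[0]   # "9.5.0-rc1" -> "9.5.0"
        return tuple(int(x) for x in core.split("."))
    pa, pb = parts(a), parts(b)
    return (pa > pb) - (pa < pb)               # numeric tuple comparison

# The gate in the log, `[[ 1 -eq -1 ]]`, is false for these inputs,
# so the legacy code path is skipped:
print(semver_cmp("9.5.0", "5.0.0"))   # -> 1
print(semver_cmp("5.0.0", "9.5.0"))   # -> -1
print(semver_cmp("5.0.0", "5.0.0"))   # -> 0
```

Comparing as integer tuples (rather than string comparison) is what makes `9.5.0` sort above `10.0.0` correctly impossible — i.e. `10.0.0` wins, which plain string comparison would get wrong.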
2026-03-29 01:22:06.474845 | orchestrator |
2026-03-29 01:22:06.474908 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-29 01:22:06.474915 | orchestrator |
2026-03-29 01:22:06.474920 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-29 01:22:06.474925 | orchestrator | Sunday 29 March 2026 01:21:58 +0000 (0:00:00.241) 0:00:00.241 **********
2026-03-29 01:22:06.474929 | orchestrator | ok: [testbed-manager]
2026-03-29 01:22:06.474934 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:06.474939 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:22:06.474943 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:22:06.474947 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:22:06.474951 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:22:06.474956 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:22:06.474960 | orchestrator |
2026-03-29 01:22:06.474964 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-29 01:22:06.474968 | orchestrator | Sunday 29 March 2026 01:21:59 +0000 (0:00:01.489) 0:00:01.731 **********
2026-03-29 01:22:06.474972 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:22:06.474977 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:06.474981 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:22:06.474985 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:22:06.474990 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:22:06.474994 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:22:06.474998 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:22:06.475002 | orchestrator |
2026-03-29 01:22:06.475007 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-29 01:22:06.475011 | orchestrator |
2026-03-29 01:22:06.475015 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-29 01:22:06.475019 | orchestrator | Sunday 29 March 2026 01:22:01 +0000 (0:00:01.316) 0:00:03.047 **********
2026-03-29 01:22:06.475023 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:06.475028 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:22:06.475032 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:22:06.475036 | orchestrator | ok: [testbed-manager]
2026-03-29 01:22:06.475040 | orchestrator | ok: [testbed-node-4]
2026-03-29 01:22:06.475044 | orchestrator | ok: [testbed-node-3]
2026-03-29 01:22:06.475048 | orchestrator | ok: [testbed-node-5]
2026-03-29 01:22:06.475053 | orchestrator |
2026-03-29 01:22:06.475057 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-29 01:22:06.475076 | orchestrator |
2026-03-29 01:22:06.475081 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-29 01:22:06.475085 | orchestrator | Sunday 29 March 2026 01:22:05 +0000 (0:00:04.346) 0:00:07.393 **********
2026-03-29 01:22:06.475089 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:22:06.475094 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:06.475098 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:22:06.475102 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:22:06.475106 | orchestrator | skipping: [testbed-node-3]
2026-03-29 01:22:06.475110 | orchestrator | skipping: [testbed-node-4]
2026-03-29 01:22:06.475114 | orchestrator | skipping: [testbed-node-5]
2026-03-29 01:22:06.475118 | orchestrator |
2026-03-29 01:22:06.475122 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:22:06.475134 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 01:22:06.475139 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 01:22:06.475143 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 01:22:06.475148 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 01:22:06.475152 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 01:22:06.475156 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 01:22:06.475160 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 01:22:06.475164 | orchestrator |
2026-03-29 01:22:06.475169 | orchestrator |
2026-03-29 01:22:06.475173 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:22:06.475177 | orchestrator | Sunday 29 March 2026 01:22:06 +0000 (0:00:00.572) 0:00:07.965 **********
2026-03-29 01:22:06.475181 | orchestrator | ===============================================================================
2026-03-29 01:22:06.475185 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.35s
2026-03-29 01:22:06.475189 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.49s
2026-03-29 01:22:06.475193 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s
2026-03-29 01:22:06.475197 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2026-03-29 01:22:06.752808 | orchestrator | + osism validate ceph-mons
2026-03-29 01:22:38.371789 | orchestrator |
2026-03-29 01:22:38.371842 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-03-29 01:22:38.371849 | orchestrator |
2026-03-29 01:22:38.371854 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-29 01:22:38.371859 | orchestrator | Sunday 29 March 2026 01:22:23 +0000 (0:00:00.445) 0:00:00.445 **********
2026-03-29 01:22:38.371864 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-29 01:22:38.371868 | orchestrator |
2026-03-29 01:22:38.371873 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-29 01:22:38.371885 | orchestrator | Sunday 29 March 2026 01:22:24 +0000 (0:00:00.815) 0:00:01.261 **********
2026-03-29 01:22:38.371893 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-29 01:22:38.371898 | orchestrator |
2026-03-29 01:22:38.371902 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-29 01:22:38.371906 | orchestrator | Sunday 29 March 2026 01:22:25 +0000 (0:00:00.982) 0:00:02.243 **********
2026-03-29 01:22:38.371922 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.371927 | orchestrator |
2026-03-29 01:22:38.371931 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-29 01:22:38.371936 | orchestrator | Sunday 29 March 2026 01:22:25 +0000 (0:00:00.126) 0:00:02.370 **********
2026-03-29 01:22:38.371940 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.371944 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:22:38.371948 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:22:38.371953 | orchestrator |
2026-03-29 01:22:38.371957 | orchestrator | TASK [Get container info] ******************************************************
2026-03-29 01:22:38.371962 | orchestrator | Sunday 29 March 2026 01:22:25 +0000 (0:00:00.326) 0:00:02.697 **********
2026-03-29 01:22:38.371966 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:22:38.371970 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.371975 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:22:38.371979 | orchestrator |
2026-03-29 01:22:38.371983 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-29 01:22:38.371988 | orchestrator | Sunday 29 March 2026 01:22:26 +0000 (0:00:01.024) 0:00:03.721 **********
2026-03-29 01:22:38.371992 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.371996 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:22:38.372001 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:22:38.372005 | orchestrator |
2026-03-29 01:22:38.372010 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-29 01:22:38.372014 | orchestrator | Sunday 29 March 2026 01:22:26 +0000 (0:00:00.279) 0:00:04.000 **********
2026-03-29 01:22:38.372018 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.372023 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:22:38.372027 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:22:38.372031 | orchestrator |
2026-03-29 01:22:38.372036 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-29 01:22:38.372040 | orchestrator | Sunday 29 March 2026 01:22:27 +0000 (0:00:00.469) 0:00:04.470 **********
2026-03-29 01:22:38.372045 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.372049 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:22:38.372054 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:22:38.372058 | orchestrator |
2026-03-29 01:22:38.372062 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-03-29 01:22:38.372067 | orchestrator | Sunday 29 March 2026 01:22:27 +0000 (0:00:00.304) 0:00:04.775 **********
2026-03-29 01:22:38.372071 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372075 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:22:38.372079 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:22:38.372083 | orchestrator |
2026-03-29 01:22:38.372088 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-03-29 01:22:38.372092 | orchestrator | Sunday 29 March 2026 01:22:27 +0000 (0:00:00.282) 0:00:05.058 **********
2026-03-29 01:22:38.372097 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.372100 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:22:38.372104 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:22:38.372108 | orchestrator |
2026-03-29 01:22:38.372111 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-29 01:22:38.372115 | orchestrator | Sunday 29 March 2026 01:22:28 +0000 (0:00:00.482) 0:00:05.540 **********
2026-03-29 01:22:38.372119 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372123 | orchestrator |
2026-03-29 01:22:38.372126 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-29 01:22:38.372130 | orchestrator | Sunday 29 March 2026 01:22:28 +0000 (0:00:00.248) 0:00:05.788 **********
2026-03-29 01:22:38.372134 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372137 | orchestrator |
2026-03-29 01:22:38.372141 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-29 01:22:38.372145 | orchestrator | Sunday 29 March 2026 01:22:28 +0000 (0:00:00.242) 0:00:06.031 **********
2026-03-29 01:22:38.372162 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372172 | orchestrator |
2026-03-29 01:22:38.372176 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 01:22:38.372180 | orchestrator | Sunday 29 March 2026 01:22:29 +0000 (0:00:00.242) 0:00:06.273 **********
2026-03-29 01:22:38.372184 | orchestrator |
2026-03-29 01:22:38.372188 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 01:22:38.372191 | orchestrator | Sunday 29 March 2026 01:22:29 +0000 (0:00:00.081) 0:00:06.355 **********
2026-03-29 01:22:38.372195 | orchestrator |
2026-03-29 01:22:38.372199 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 01:22:38.372202 | orchestrator | Sunday 29 March 2026 01:22:29 +0000 (0:00:00.070) 0:00:06.426 **********
2026-03-29 01:22:38.372206 | orchestrator |
2026-03-29 01:22:38.372210 | orchestrator | TASK [Print report file information] *******************************************
2026-03-29 01:22:38.372213 | orchestrator | Sunday 29 March 2026 01:22:29 +0000 (0:00:00.095) 0:00:06.521 **********
2026-03-29 01:22:38.372217 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372221 | orchestrator |
2026-03-29 01:22:38.372225 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-29 01:22:38.372228 | orchestrator | Sunday 29 March 2026 01:22:29 +0000 (0:00:00.262) 0:00:06.784 **********
2026-03-29 01:22:38.372232 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372236 | orchestrator |
2026-03-29 01:22:38.372247 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-03-29 01:22:38.372251 | orchestrator | Sunday 29 March 2026 01:22:29 +0000 (0:00:00.256) 0:00:07.040 **********
2026-03-29 01:22:38.372255 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.372259 | orchestrator |
2026-03-29 01:22:38.372262 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-03-29 01:22:38.372266 | orchestrator | Sunday 29 March 2026 01:22:29 +0000 (0:00:00.131) 0:00:07.172 **********
2026-03-29 01:22:38.372270 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:22:38.372273 | orchestrator |
2026-03-29 01:22:38.372277 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-03-29 01:22:38.372281 | orchestrator | Sunday 29 March 2026 01:22:31 +0000 (0:00:01.463) 0:00:08.635 **********
2026-03-29 01:22:38.372285 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.372288 | orchestrator |
2026-03-29 01:22:38.372292 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-03-29 01:22:38.372296 | orchestrator | Sunday 29 March 2026 01:22:31 +0000 (0:00:00.515) 0:00:09.150 **********
2026-03-29 01:22:38.372299 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372303 | orchestrator |
2026-03-29 01:22:38.372307 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-03-29 01:22:38.372310 | orchestrator | Sunday 29 March 2026 01:22:32 +0000 (0:00:00.125) 0:00:09.276 **********
2026-03-29 01:22:38.372314 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.372318 | orchestrator |
2026-03-29 01:22:38.372321 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-03-29 01:22:38.372325 | orchestrator | Sunday 29 March 2026 01:22:32 +0000 (0:00:00.323) 0:00:09.599 **********
2026-03-29 01:22:38.372329 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.372332 | orchestrator |
2026-03-29 01:22:38.372336 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-03-29 01:22:38.372340 | orchestrator | Sunday 29 March 2026 01:22:32 +0000 (0:00:00.302) 0:00:09.902 **********
2026-03-29 01:22:38.372343 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372347 | orchestrator |
2026-03-29 01:22:38.372351 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-03-29 01:22:38.372357 | orchestrator | Sunday 29 March 2026 01:22:32 +0000 (0:00:00.124) 0:00:10.027 **********
2026-03-29 01:22:38.372366 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.372374 | orchestrator |
2026-03-29 01:22:38.372380 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-03-29 01:22:38.372386 | orchestrator | Sunday 29 March 2026 01:22:32 +0000 (0:00:00.128) 0:00:10.156 **********
2026-03-29 01:22:38.372396 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.372403 | orchestrator |
2026-03-29 01:22:38.372408 | orchestrator | TASK [Gather status data] ******************************************************
2026-03-29 01:22:38.372414 | orchestrator | Sunday 29 March 2026 01:22:33 +0000 (0:00:00.131) 0:00:10.287 **********
2026-03-29 01:22:38.372420 | orchestrator | changed: [testbed-node-0]
2026-03-29 01:22:38.372426 | orchestrator |
2026-03-29 01:22:38.372432 | orchestrator | TASK [Set health test data] ****************************************************
2026-03-29 01:22:38.372439 | orchestrator | Sunday 29 March 2026 01:22:34 +0000 (0:00:01.155) 0:00:11.442 **********
2026-03-29 01:22:38.372445 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.372451 | orchestrator |
2026-03-29 01:22:38.372458 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-03-29 01:22:38.372464 | orchestrator | Sunday 29 March 2026 01:22:34 +0000 (0:00:00.336) 0:00:11.779 **********
2026-03-29 01:22:38.372471 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372477 | orchestrator |
2026-03-29 01:22:38.372483 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-03-29 01:22:38.372493 | orchestrator | Sunday 29 March 2026 01:22:34 +0000 (0:00:00.145) 0:00:11.924 **********
2026-03-29 01:22:38.372499 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:22:38.372506 | orchestrator |
2026-03-29 01:22:38.372512 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-03-29 01:22:38.372518 | orchestrator | Sunday 29 March 2026 01:22:34 +0000 (0:00:00.145) 0:00:12.070 **********
2026-03-29 01:22:38.372525 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372531 | orchestrator |
2026-03-29 01:22:38.372537 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-03-29 01:22:38.372543 | orchestrator | Sunday 29 March 2026 01:22:35 +0000 (0:00:00.140) 0:00:12.210 **********
2026-03-29 01:22:38.372550 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372556 | orchestrator |
2026-03-29 01:22:38.372562 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-29 01:22:38.372568 | orchestrator | Sunday 29 March 2026 01:22:35 +0000 (0:00:00.341) 0:00:12.552 **********
2026-03-29 01:22:38.372574 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-29 01:22:38.372580 | orchestrator |
2026-03-29 01:22:38.372587 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-29 01:22:38.372592 | orchestrator | Sunday 29 March 2026 01:22:35 +0000 (0:00:00.268) 0:00:12.820 **********
2026-03-29 01:22:38.372598 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:22:38.372605 | orchestrator |
2026-03-29 01:22:38.372612 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-29 01:22:38.372618 | orchestrator | Sunday 29 March 2026 01:22:35 +0000 (0:00:00.259) 0:00:13.079 **********
2026-03-29 01:22:38.372625 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-29 01:22:38.372631 | orchestrator |
2026-03-29 01:22:38.372637 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-29 01:22:38.372644 | orchestrator | Sunday 29 March 2026 01:22:37 +0000 (0:00:01.741) 0:00:14.821 **********
2026-03-29 01:22:38.372653 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-29 01:22:38.372659 | orchestrator |
2026-03-29 01:22:38.372665 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-29 01:22:38.372671 | orchestrator | Sunday 29 March 2026 01:22:37 +0000 (0:00:00.278) 0:00:15.099 **********
2026-03-29 01:22:38.372678 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-29 01:22:38.372683 | orchestrator |
2026-03-29 01:22:38.372707 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 01:22:41.012165 | orchestrator | Sunday 29 March 2026 01:22:38 +0000 (0:00:00.242) 0:00:15.342 **********
2026-03-29 01:22:41.012265 | orchestrator |
2026-03-29 01:22:41.012289 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 01:22:41.012329 | orchestrator | Sunday 29 March 2026 01:22:38 +0000 (0:00:00.069) 0:00:15.412 **********
2026-03-29 01:22:41.012337 | orchestrator |
2026-03-29 01:22:41.012344 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 01:22:41.012350 | orchestrator | Sunday 29 March 2026 01:22:38 +0000 (0:00:00.068) 0:00:15.480 **********
2026-03-29 01:22:41.012356 | orchestrator |
2026-03-29 01:22:41.012363 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-29 01:22:41.012369 | orchestrator | Sunday 29 March 2026 01:22:38 +0000 (0:00:00.080) 0:00:15.561 **********
2026-03-29 01:22:41.012377 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-29 01:22:41.012383 | orchestrator |
2026-03-29 01:22:41.012389 | orchestrator | TASK [Print report file information] *******************************************
2026-03-29 01:22:41.012396 | orchestrator | Sunday 29 March 2026 01:22:39 +0000 (0:00:01.472) 0:00:17.034 **********
2026-03-29 01:22:41.012403 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-29 01:22:41.012410 | orchestrator |     "msg": [
2026-03-29 01:22:41.012419 | orchestrator |         "Validator run completed.",
2026-03-29 01:22:41.012425 | orchestrator |         "You can find the report file here:",
2026-03-29 01:22:41.012446 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2026-03-29T01:22:23+00:00-report.json",
2026-03-29 01:22:41.012454 | orchestrator |         "on the following host:",
2026-03-29 01:22:41.012460 | orchestrator |         "testbed-manager"
2026-03-29 01:22:41.012466 | orchestrator |     ]
2026-03-29 01:22:41.012472 | orchestrator | }
2026-03-29 01:22:41.012477 | orchestrator |
2026-03-29 01:22:41.012483 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:22:41.012490 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-03-29 01:22:41.012497 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 01:22:41.012504 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-29 01:22:41.012510 | orchestrator |
2026-03-29 01:22:41.012516 | orchestrator |
2026-03-29 01:22:41.012528 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:22:41.012535 | orchestrator | Sunday 29 March 2026 01:22:40 +0000 (0:00:00.871) 0:00:17.905 **********
2026-03-29 01:22:41.012540 | orchestrator | ===============================================================================
2026-03-29 01:22:41.012546 | orchestrator | Aggregate test results step one ----------------------------------------- 1.74s
2026-03-29 01:22:41.012552 | orchestrator | Write report file ------------------------------------------------------- 1.47s
2026-03-29 01:22:41.012558 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.46s
2026-03-29 01:22:41.012563 | orchestrator | Gather status data ------------------------------------------------------ 1.16s
2026-03-29 01:22:41.012569 | orchestrator | Get container info ------------------------------------------------------ 1.02s
2026-03-29 01:22:41.012576 | orchestrator | Create report output directory ------------------------------------------ 0.98s
2026-03-29 01:22:41.012582 | orchestrator | Print report file information ------------------------------------------- 0.87s
2026-03-29 01:22:41.012588 | orchestrator | Get timestamp for report file ------------------------------------------- 0.82s
2026-03-29 01:22:41.012594 | orchestrator | Set quorum test data ---------------------------------------------------- 0.52s
2026-03-29 01:22:41.012600 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.48s
2026-03-29 01:22:41.012607 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s
2026-03-29 01:22:41.012613 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.34s
2026-03-29 01:22:41.012618 | orchestrator | Set health test data ---------------------------------------------------- 0.34s
2026-03-29 01:22:41.012634 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s
2026-03-29 01:22:41.012642 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s
2026-03-29 01:22:41.012652 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2026-03-29 01:22:41.012658 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s
2026-03-29 01:22:41.012664 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.28s
2026-03-29 01:22:41.012669 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s
2026-03-29 01:22:41.012675 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s
2026-03-29 01:22:41.300597 | orchestrator | + osism validate ceph-mgrs
2026-03-29 01:23:12.679755 | orchestrator |
2026-03-29 01:23:12.679983 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-03-29 01:23:12.679996 | orchestrator |
2026-03-29 01:23:12.680007 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-29 01:23:12.680014 | orchestrator | Sunday 29 March 2026 01:22:57 +0000 (0:00:00.446) 0:00:00.446 **********
2026-03-29 01:23:12.680021 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-29 01:23:12.680027 | orchestrator |
2026-03-29 01:23:12.680033 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-29 01:23:12.680040 | orchestrator | Sunday 29 March 2026 01:22:58 +0000 (0:00:00.831) 0:00:01.277 **********
2026-03-29 01:23:12.680046 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-29 01:23:12.680051 | orchestrator |
2026-03-29 01:23:12.680057 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-29 01:23:12.680064 | orchestrator | Sunday 29 March 2026 01:22:59 +0000 (0:00:01.004) 0:00:02.282 **********
2026-03-29 01:23:12.680069 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:23:12.680084 | orchestrator |
2026-03-29 01:23:12.680096 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-29 01:23:12.680103 | orchestrator | Sunday 29 March 2026 01:22:59 +0000 (0:00:00.129) 0:00:02.412 **********
2026-03-29 01:23:12.680109 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:23:12.680115 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:23:12.680121 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:23:12.680127 | orchestrator |
2026-03-29 01:23:12.680133 | orchestrator | TASK [Get container info] ******************************************************
2026-03-29 01:23:12.680139 | orchestrator | Sunday 29 March 2026 01:23:00 +0000 (0:00:00.324) 0:00:02.737 **********
2026-03-29 01:23:12.680145 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:23:12.680151 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:23:12.680156 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:23:12.680162 | orchestrator |
2026-03-29 01:23:12.680169 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-29 01:23:12.680175 | orchestrator | Sunday 29 March 2026 01:23:01 +0000 (0:00:01.036) 0:00:03.774 **********
2026-03-29 01:23:12.680180 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:23:12.680186 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:23:12.680192 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:23:12.680198 | orchestrator |
2026-03-29 01:23:12.680203 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-29 01:23:12.680209 | orchestrator | Sunday 29 March 2026 01:23:01 +0000 (0:00:00.286) 0:00:04.060 **********
2026-03-29 01:23:12.680215 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:23:12.680220 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:23:12.680226 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:23:12.680232 | orchestrator |
2026-03-29 01:23:12.680237 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-29 01:23:12.680244 | orchestrator | Sunday 29 March 2026 01:23:02 +0000 (0:00:00.531) 0:00:04.591 **********
2026-03-29 01:23:12.680250 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:23:12.680257 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:23:12.680286 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:23:12.680290 | orchestrator |
2026-03-29 01:23:12.680294 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-03-29 01:23:12.680298 | orchestrator | Sunday 29 March 2026 01:23:02 +0000 (0:00:00.332) 0:00:04.924 **********
2026-03-29 01:23:12.680302 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:23:12.680307 | orchestrator | skipping: [testbed-node-1]
2026-03-29 01:23:12.680331 | orchestrator | skipping: [testbed-node-2]
2026-03-29 01:23:12.680338 | orchestrator |
2026-03-29 01:23:12.680345 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-03-29 01:23:12.680350 | orchestrator | Sunday 29 March 2026 01:23:02 +0000 (0:00:00.327) 0:00:05.252 **********
2026-03-29 01:23:12.680354 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:23:12.680359 | orchestrator | ok: [testbed-node-1]
2026-03-29 01:23:12.680363 | orchestrator | ok: [testbed-node-2]
2026-03-29 01:23:12.680368 | orchestrator |
2026-03-29 01:23:12.680372 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-29 01:23:12.680376 | orchestrator | Sunday 29 March 2026 01:23:03 +0000 (0:00:00.537) 0:00:05.789 **********
2026-03-29 01:23:12.680381 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:23:12.680385 | orchestrator |
2026-03-29 01:23:12.680390 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-29 01:23:12.680397 | orchestrator | Sunday 29 March 2026 01:23:03 +0000 (0:00:00.316) 0:00:06.105 **********
2026-03-29 01:23:12.680402 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:23:12.680406 | orchestrator |
2026-03-29 01:23:12.680411 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-29 01:23:12.680415 | orchestrator | Sunday 29 March 2026 01:23:03 +0000 (0:00:00.276) 0:00:06.382 **********
2026-03-29 01:23:12.680420 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:23:12.680424 | orchestrator |
2026-03-29 01:23:12.680428 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 01:23:12.680432 | orchestrator | Sunday 29 March 2026 01:23:04 +0000 (0:00:00.238) 0:00:06.621 **********
2026-03-29 01:23:12.680437 | orchestrator |
2026-03-29 01:23:12.680441 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 01:23:12.680445 | orchestrator | Sunday 29 March 2026 01:23:04 +0000 (0:00:00.069) 0:00:06.690 **********
2026-03-29 01:23:12.680449 | orchestrator |
2026-03-29 01:23:12.680454 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-29 01:23:12.680458 | orchestrator | Sunday 29 March 2026 01:23:04 +0000 (0:00:00.070) 0:00:06.761 **********
2026-03-29 01:23:12.680462 | orchestrator |
2026-03-29 01:23:12.680466 | orchestrator | TASK [Print report file information] *******************************************
2026-03-29 01:23:12.680471 | orchestrator | Sunday 29 March 2026 01:23:04 +0000 (0:00:00.076) 0:00:06.838 **********
2026-03-29 01:23:12.680475 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:23:12.680480 | orchestrator |
2026-03-29 01:23:12.680484 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-29 01:23:12.680489 | orchestrator | Sunday 29 March 2026 01:23:04 +0000 (0:00:00.276) 0:00:07.115 **********
2026-03-29 01:23:12.680493 | orchestrator | skipping: [testbed-node-0]
2026-03-29 01:23:12.680497 | orchestrator |
2026-03-29 01:23:12.680517 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-03-29 01:23:12.680521 | orchestrator | Sunday 29 March 2026 01:23:04 +0000 (0:00:00.233) 0:00:07.349 **********
2026-03-29 01:23:12.680525 | orchestrator | ok: [testbed-node-0]
2026-03-29 01:23:12.680528 | orchestrator |
2026-03-29 01:23:12.680532 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-03-29 01:23:12.680536 | orchestrator | Sunday 29 March 2026 01:23:05 +0000 (0:00:00.125) 0:00:07.474 ********** 2026-03-29 01:23:12.680539 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:23:12.680543 | orchestrator | 2026-03-29 01:23:12.680547 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-29 01:23:12.680557 | orchestrator | Sunday 29 March 2026 01:23:07 +0000 (0:00:02.213) 0:00:09.688 ********** 2026-03-29 01:23:12.680563 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:23:12.680571 | orchestrator | 2026-03-29 01:23:12.680579 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-29 01:23:12.680585 | orchestrator | Sunday 29 March 2026 01:23:07 +0000 (0:00:00.441) 0:00:10.129 ********** 2026-03-29 01:23:12.680590 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:23:12.680595 | orchestrator | 2026-03-29 01:23:12.680602 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-29 01:23:12.680607 | orchestrator | Sunday 29 March 2026 01:23:07 +0000 (0:00:00.319) 0:00:10.449 ********** 2026-03-29 01:23:12.680613 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:23:12.680619 | orchestrator | 2026-03-29 01:23:12.680625 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-29 01:23:12.680631 | orchestrator | Sunday 29 March 2026 01:23:08 +0000 (0:00:00.140) 0:00:10.590 ********** 2026-03-29 01:23:12.680636 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:23:12.680642 | orchestrator | 2026-03-29 01:23:12.680647 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-29 01:23:12.680653 | orchestrator | Sunday 29 March 2026 01:23:08 +0000 (0:00:00.150) 0:00:10.740 ********** 2026-03-29 01:23:12.680660 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 
01:23:12.680666 | orchestrator | 2026-03-29 01:23:12.680672 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-29 01:23:12.680678 | orchestrator | Sunday 29 March 2026 01:23:08 +0000 (0:00:00.283) 0:00:11.024 ********** 2026-03-29 01:23:12.680684 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:23:12.680690 | orchestrator | 2026-03-29 01:23:12.680697 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 01:23:12.680701 | orchestrator | Sunday 29 March 2026 01:23:08 +0000 (0:00:00.252) 0:00:11.276 ********** 2026-03-29 01:23:12.680705 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:12.680708 | orchestrator | 2026-03-29 01:23:12.680712 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 01:23:12.680716 | orchestrator | Sunday 29 March 2026 01:23:10 +0000 (0:00:01.248) 0:00:12.525 ********** 2026-03-29 01:23:12.680719 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:12.680723 | orchestrator | 2026-03-29 01:23:12.680727 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-29 01:23:12.680730 | orchestrator | Sunday 29 March 2026 01:23:10 +0000 (0:00:00.271) 0:00:12.797 ********** 2026-03-29 01:23:12.680734 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:12.680738 | orchestrator | 2026-03-29 01:23:12.680742 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:23:12.680745 | orchestrator | Sunday 29 March 2026 01:23:10 +0000 (0:00:00.254) 0:00:13.051 ********** 2026-03-29 01:23:12.680749 | orchestrator | 2026-03-29 01:23:12.680753 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:23:12.680756 | orchestrator 
| Sunday 29 March 2026 01:23:10 +0000 (0:00:00.071) 0:00:13.123 ********** 2026-03-29 01:23:12.680776 | orchestrator | 2026-03-29 01:23:12.680781 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:23:12.680784 | orchestrator | Sunday 29 March 2026 01:23:10 +0000 (0:00:00.070) 0:00:13.193 ********** 2026-03-29 01:23:12.680788 | orchestrator | 2026-03-29 01:23:12.680792 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-29 01:23:12.680800 | orchestrator | Sunday 29 March 2026 01:23:10 +0000 (0:00:00.252) 0:00:13.445 ********** 2026-03-29 01:23:12.680806 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:12.680812 | orchestrator | 2026-03-29 01:23:12.680818 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 01:23:12.680824 | orchestrator | Sunday 29 March 2026 01:23:12 +0000 (0:00:01.314) 0:00:14.760 ********** 2026-03-29 01:23:12.680838 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-29 01:23:12.680848 | orchestrator |  "msg": [ 2026-03-29 01:23:12.680854 | orchestrator |  "Validator run completed.", 2026-03-29 01:23:12.680861 | orchestrator |  "You can find the report file here:", 2026-03-29 01:23:12.680867 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-29T01:22:58+00:00-report.json", 2026-03-29 01:23:12.680874 | orchestrator |  "on the following host:", 2026-03-29 01:23:12.680880 | orchestrator |  "testbed-manager" 2026-03-29 01:23:12.680886 | orchestrator |  ] 2026-03-29 01:23:12.680892 | orchestrator | } 2026-03-29 01:23:12.680898 | orchestrator | 2026-03-29 01:23:12.680904 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:23:12.680911 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-03-29 01:23:12.680919 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:23:12.680933 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 01:23:12.895494 | orchestrator | 2026-03-29 01:23:12.895565 | orchestrator | 2026-03-29 01:23:12.895571 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:23:12.895577 | orchestrator | Sunday 29 March 2026 01:23:12 +0000 (0:00:00.367) 0:00:15.128 ********** 2026-03-29 01:23:12.895582 | orchestrator | =============================================================================== 2026-03-29 01:23:12.895586 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.21s 2026-03-29 01:23:12.895590 | orchestrator | Write report file ------------------------------------------------------- 1.31s 2026-03-29 01:23:12.895594 | orchestrator | Aggregate test results step one ----------------------------------------- 1.25s 2026-03-29 01:23:12.895598 | orchestrator | Get container info ------------------------------------------------------ 1.04s 2026-03-29 01:23:12.895602 | orchestrator | Create report output directory ------------------------------------------ 1.00s 2026-03-29 01:23:12.895606 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s 2026-03-29 01:23:12.895610 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.54s 2026-03-29 01:23:12.895613 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s 2026-03-29 01:23:12.895617 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.44s 2026-03-29 01:23:12.895621 | orchestrator | Flush handlers ---------------------------------------------------------- 0.39s 2026-03-29 01:23:12.895625 | 
orchestrator | Print report file information ------------------------------------------- 0.37s 2026-03-29 01:23:12.895628 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2026-03-29 01:23:12.895632 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.33s 2026-03-29 01:23:12.895636 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s 2026-03-29 01:23:12.895639 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2026-03-29 01:23:12.895643 | orchestrator | Aggregate test results step one ----------------------------------------- 0.32s 2026-03-29 01:23:12.895647 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2026-03-29 01:23:12.895651 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s 2026-03-29 01:23:12.895654 | orchestrator | Print report file information ------------------------------------------- 0.28s 2026-03-29 01:23:12.895658 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2026-03-29 01:23:13.104146 | orchestrator | + osism validate ceph-osds 2026-03-29 01:23:34.381197 | orchestrator | 2026-03-29 01:23:34.381292 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-29 01:23:34.381324 | orchestrator | 2026-03-29 01:23:34.381334 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-29 01:23:34.381343 | orchestrator | Sunday 29 March 2026 01:23:29 +0000 (0:00:00.452) 0:00:00.452 ********** 2026-03-29 01:23:34.381350 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:34.381357 | orchestrator | 2026-03-29 01:23:34.381363 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-03-29 01:23:34.381370 | orchestrator | Sunday 29 March 2026 01:23:30 +0000 (0:00:00.937) 0:00:01.390 ********** 2026-03-29 01:23:34.381375 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:34.381378 | orchestrator | 2026-03-29 01:23:34.381382 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-29 01:23:34.381386 | orchestrator | Sunday 29 March 2026 01:23:31 +0000 (0:00:00.512) 0:00:01.902 ********** 2026-03-29 01:23:34.381392 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:34.381400 | orchestrator | 2026-03-29 01:23:34.381408 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-29 01:23:34.381414 | orchestrator | Sunday 29 March 2026 01:23:31 +0000 (0:00:00.697) 0:00:02.600 ********** 2026-03-29 01:23:34.381420 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:34.381427 | orchestrator | 2026-03-29 01:23:34.381433 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-03-29 01:23:34.381440 | orchestrator | Sunday 29 March 2026 01:23:32 +0000 (0:00:00.129) 0:00:02.730 ********** 2026-03-29 01:23:34.381447 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:34.381453 | orchestrator | 2026-03-29 01:23:34.381459 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-29 01:23:34.381466 | orchestrator | Sunday 29 March 2026 01:23:32 +0000 (0:00:00.150) 0:00:02.880 ********** 2026-03-29 01:23:34.381470 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:34.381473 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:23:34.381477 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:23:34.381481 | orchestrator | 2026-03-29 01:23:34.381484 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-03-29 01:23:34.381488 | orchestrator | Sunday 29 March 2026 01:23:32 +0000 (0:00:00.310) 0:00:03.191 ********** 2026-03-29 01:23:34.381492 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:34.381495 | orchestrator | 2026-03-29 01:23:34.381499 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-03-29 01:23:34.381503 | orchestrator | Sunday 29 March 2026 01:23:32 +0000 (0:00:00.152) 0:00:03.344 ********** 2026-03-29 01:23:34.381506 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:34.381510 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:34.381514 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:34.381517 | orchestrator | 2026-03-29 01:23:34.381521 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-03-29 01:23:34.381525 | orchestrator | Sunday 29 March 2026 01:23:33 +0000 (0:00:00.306) 0:00:03.650 ********** 2026-03-29 01:23:34.381529 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:34.381532 | orchestrator | 2026-03-29 01:23:34.381536 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:23:34.381540 | orchestrator | Sunday 29 March 2026 01:23:33 +0000 (0:00:00.801) 0:00:04.451 ********** 2026-03-29 01:23:34.381543 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:34.381547 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:34.381551 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:34.381554 | orchestrator | 2026-03-29 01:23:34.381558 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-03-29 01:23:34.381562 | orchestrator | Sunday 29 March 2026 01:23:34 +0000 (0:00:00.301) 0:00:04.753 ********** 2026-03-29 01:23:34.381567 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cae2ba122c3d7336ceb5af8019ef0fb3fe8a15d9160a581339fc0a37982eb4b4', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-29 01:23:34.381578 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4e220322874333052fcc2540be1b9b5ae84551c6c7a6b02e1dff6b2c8f3ee158', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 01:23:34.381582 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a9b0dcacdb8869fd4ab3e91e9745ad67ea90c9b418b10cf3d3133b7a99c7d47a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 01:23:34.381587 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a26a5ec8e34270df3c44acc232a269c658c3a80c7ec0b0a254e184eda54c43e0', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-29 01:23:34.381592 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8d6029b58c05b30ccd14a70743316028b8977cc6d32bfe66b3df582b31ab109e', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-29 01:23:34.381608 | orchestrator | skipping: [testbed-node-3] => (item={'id': '80ed784ad6f9bf718cb2cf94b4f9ca59636a12783ba1a444502cd26b8977419f', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-03-29 01:23:34.381612 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c83a3c4a47227e73cafacc6ace6e3a61922463651a77b9f352955cee1f29d342', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': 
'/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2026-03-29 01:23:34.381627 | orchestrator | skipping: [testbed-node-3] => (item={'id': '78a190437fe79121f2623cb4f2ee7adb149990d9a9e091e02e815682384b9673', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-29 01:23:34.381632 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bc06ad3a0aff54f96c4bda4dbe87f9674b13e83c23709f2475f2f5ec9bc7ac75', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2026-03-29 01:23:34.381639 | orchestrator | skipping: [testbed-node-3] => (item={'id': '41f5f0cf2a2af44b55a384c9269e7334c7b6a9b650b9b10307d4569e5678d1a7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-29 01:23:34.381645 | orchestrator | ok: [testbed-node-3] => (item={'id': '910fe1c1c4c27d5f1c02d286f504877f60771dd5b942051416b3f9aedbfc36c2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-03-29 01:23:34.381649 | orchestrator | ok: [testbed-node-3] => (item={'id': '500f8d6c533a739b5818f48934e4bb5f70aa4e85b45c0245718c4695bd67cd76', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-03-29 01:23:34.381653 | orchestrator | skipping: [testbed-node-3] => (item={'id': '420c535349df009403f44793c389553789f31e688b43854627ac1bf5b9c4e678', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-29 01:23:34.381657 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'ceed09af5b40214f7374b4617d466f68c1d167fe17c9c9ad73feea0ad607d1fa', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-03-29 01:23:34.381665 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f7dcb5f201ad6ca13a97d9d1320a60d24d44c7e5595ee4d02e3b572dfe5912e2', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-03-29 01:23:34.381669 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fd2de6a784f6dbb5d99b862b3568e33ff70db643e4fd49fe744985c3affa81bd', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-03-29 01:23:34.381672 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aec913da48490ab7c909d4a2f3b091091b2758e50107bfe544a376febf351296', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-03-29 01:23:34.381676 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0fcb2f5a1a7539b9f1bc27092bdc08cfb275411a1b7c31144659e5b768ddd0b3', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-29 01:23:34.381680 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0d067578f423222de9ca351eefef5c585d232f4924a2f84bf6fe52e5eb4ee715', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-03-29 01:23:34.381684 | orchestrator | skipping: [testbed-node-4] => (item={'id': '530985bb4b26f86fc4567f525ed8c8722afe73343e51c436d37254c6c2fe752a', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 01:23:34.381690 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fffe3e4dc3fa370a14705a94bb479d7798165f8e9280ec9ed82ae90c0901b46e', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 01:23:34.695632 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2117eba860ab372fc8a48bd2f84123d8216b0c22577aeb2a8fc27c9d972879fe', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-29 01:23:34.695695 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f483e2ea9a0d3200447fd570f7ea96e956ec525759d5022c0432dfc868016fd7', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-29 01:23:34.695716 | orchestrator | skipping: [testbed-node-4] => (item={'id': '21c434786afd76a2b7b558102096d65bf16683786a7ce227909c2996007059b4', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-03-29 01:23:34.695723 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7f5f05c1a54e5448928f3990d54cae81203f2941b71c26979c5f8b883d38737c', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2026-03-29 01:23:34.695729 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c4a650856aba2b508ea38908eac6fd1662d71c7bdc1a32df7e5e9106a301fcbd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 
22 minutes'})  2026-03-29 01:23:34.695736 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5e6a78eb5a027fa20f949d6c45812b5dd422f7f26f0cd2e9e940da01b0b557ed', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2026-03-29 01:23:34.695743 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3e3fa2e13f976840964f45f8b50326c5f67c40e2cdd290487691092f75e7b2a0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-29 01:23:34.695764 | orchestrator | ok: [testbed-node-4] => (item={'id': '7631587f6e7bc066aa70aaa04fa9b49ad055dd51d9c9ac00ac6aaa5fb7fb01e4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-03-29 01:23:34.695772 | orchestrator | ok: [testbed-node-4] => (item={'id': 'b013e42e401e37f9a5207614dbeeee4d238be3ea507dea0253acadef4093667d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-03-29 01:23:34.695779 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5fa882063347abec322bd249c4bf607c7fd163abadefbb2a8305858068529207', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-29 01:23:34.695786 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4e52f251c5f051a60eddb17f99365fdccda1b793db3688ba2f0a84c1ac114935', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-03-29 01:23:34.695792 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7debeda1d26a1c6cae78107606dedbbb4b6460ef8a568414d9bcb7b1fc47eb99', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-03-29 01:23:34.695892 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e1a2952d0b32d7d4a9b886acd84506dfab52abc3607a66b954b64275071950b2', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2026-03-29 01:23:34.695901 | orchestrator | skipping: [testbed-node-4] => (item={'id': '13ff6d437915f557b6f8d93904d8becfdf0f79ee81cdd67b67d79d09ada4e66e', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-03-29 01:23:34.695922 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f8b32d0b9898d3c8ee81a803f29ba1d67ef76ac55a622939948192909c729ca3', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-03-29 01:23:34.695929 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7f41e444560e053b3e97ad1988d8500174f1650dc0ff0b8c748f35e447a5f1ae', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2026-03-29 01:23:34.695936 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3e91ca5b512a36982fc8764e100c676a3906673f11664621b73983bac1a25750', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 01:23:34.695942 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5c22cbf105dbb089c9850b6076115fc75bd651dd90c2de63c13022839fa35e3b', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2026-03-29 
01:23:34.695954 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0d0fec13b200259bd6a7bee189c23559417df570764147afbffd14483bc3a157', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-29 01:23:34.695960 | orchestrator | skipping: [testbed-node-5] => (item={'id': '518b54cd485b0f3523150f021322b48b757fa5ad5ea864d8c8336d8bad3bb96b', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2026-03-29 01:23:34.695980 | orchestrator | skipping: [testbed-node-5] => (item={'id': '489cf7d81737718d8f870351992a9a975199a62116ab6fb2e3741432a991e55b', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2026-03-29 01:23:34.695988 | orchestrator | skipping: [testbed-node-5] => (item={'id': '25e3f9f7155f2fc59c456f313015f85b990900b80d91d956d10174230a922cfb', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 15 minutes (healthy)'})  2026-03-29 01:23:34.695995 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f1327cbf0c09c77ed1d445953726b3951dac99b773d50efc3921f05809b4a853', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2026-03-29 01:23:34.696001 | orchestrator | skipping: [testbed-node-5] => (item={'id': '44a343e2cfe954143ef3e58112e5ec8a6fa53452ef9d4b0cab287c1999c6af1e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2026-03-29 01:23:34.696007 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'1ddd20eda24c2a33960d7ce6fb6fcc2f55f18a126d3497a4c5f8dc3b7cabe0c2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2026-03-29 01:23:34.696014 | orchestrator | ok: [testbed-node-5] => (item={'id': '0cfcba927e92e5991928443bd414ac7395904fce1f99fa73756a6982a889ed98', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-03-29 01:23:34.696021 | orchestrator | ok: [testbed-node-5] => (item={'id': 'a5a47d9f0065e28a0bc1b86ed90bc71c546a1780c78cb6147d89ebe8a05dc84a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2026-03-29 01:23:34.696027 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e130d5ba175a6848e4e8ccdff3d24ed0d17701874a220e29c9e385fb2faabacf', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 29 minutes'})  2026-03-29 01:23:34.696034 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e53b3e9326717174abf16f7cda03d9bf0a98ecd96ea0bdebb1139e91c56a3225', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-03-29 01:23:34.696047 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd4847386cac14aab1ec66ac0619f7d4dca0d7db792cd0d774acabc6d84ab66f8', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2026-03-29 01:23:46.550872 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'de25bd257c798ab3edb980c463e68f833732d86f56b3c76777f2f563a109282b', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 31 
minutes'})  2026-03-29 01:23:46.550930 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b3237c5948e76d76550081cd53605c8316c215cb38a1abd5c56f45da8b10b368', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2026-03-29 01:23:46.550938 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5fac060bf65776778a9f466f37b71459ffca2e9173a51188f4841b4a38177026', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2026-03-29 01:23:46.550943 | orchestrator | 2026-03-29 01:23:46.550948 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-03-29 01:23:46.550966 | orchestrator | Sunday 29 March 2026 01:23:34 +0000 (0:00:00.548) 0:00:05.301 ********** 2026-03-29 01:23:46.550975 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.550985 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:46.550991 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:46.550997 | orchestrator | 2026-03-29 01:23:46.551003 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-03-29 01:23:46.551009 | orchestrator | Sunday 29 March 2026 01:23:35 +0000 (0:00:00.329) 0:00:05.631 ********** 2026-03-29 01:23:46.551015 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:46.551021 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:23:46.551027 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:23:46.551033 | orchestrator | 2026-03-29 01:23:46.551040 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-03-29 01:23:46.551046 | orchestrator | Sunday 29 March 2026 01:23:35 +0000 (0:00:00.497) 0:00:06.129 ********** 2026-03-29 01:23:46.551052 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551058 | orchestrator | ok: 
[testbed-node-4] 2026-03-29 01:23:46.551065 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:46.551071 | orchestrator | 2026-03-29 01:23:46.551077 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:23:46.551083 | orchestrator | Sunday 29 March 2026 01:23:35 +0000 (0:00:00.324) 0:00:06.453 ********** 2026-03-29 01:23:46.551090 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551096 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:46.551103 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:46.551109 | orchestrator | 2026-03-29 01:23:46.551114 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-03-29 01:23:46.551118 | orchestrator | Sunday 29 March 2026 01:23:36 +0000 (0:00:00.298) 0:00:06.751 ********** 2026-03-29 01:23:46.551122 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-03-29 01:23:46.551148 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-03-29 01:23:46.551155 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:46.551162 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-03-29 01:23:46.551168 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-03-29 01:23:46.551174 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:23:46.551181 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-03-29 01:23:46.551187 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-03-29 01:23:46.551194 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:23:46.551201 | orchestrator | 2026-03-29 01:23:46.551207 | orchestrator | TASK [Get 
count of ceph-osd containers that are not running] ******************* 2026-03-29 01:23:46.551214 | orchestrator | Sunday 29 March 2026 01:23:36 +0000 (0:00:00.322) 0:00:07.074 ********** 2026-03-29 01:23:46.551218 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551222 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:46.551225 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:46.551229 | orchestrator | 2026-03-29 01:23:46.551233 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-29 01:23:46.551237 | orchestrator | Sunday 29 March 2026 01:23:36 +0000 (0:00:00.472) 0:00:07.546 ********** 2026-03-29 01:23:46.551241 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:46.551245 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:23:46.551248 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:23:46.551252 | orchestrator | 2026-03-29 01:23:46.551256 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-29 01:23:46.551260 | orchestrator | Sunday 29 March 2026 01:23:37 +0000 (0:00:00.295) 0:00:07.842 ********** 2026-03-29 01:23:46.551263 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:46.551272 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:23:46.551276 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:23:46.551280 | orchestrator | 2026-03-29 01:23:46.551283 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-03-29 01:23:46.551287 | orchestrator | Sunday 29 March 2026 01:23:37 +0000 (0:00:00.294) 0:00:08.136 ********** 2026-03-29 01:23:46.551291 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551295 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:46.551298 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:46.551302 | orchestrator | 2026-03-29 01:23:46.551306 | orchestrator | TASK [Aggregate test results step one] 
***************************************** 2026-03-29 01:23:46.551309 | orchestrator | Sunday 29 March 2026 01:23:37 +0000 (0:00:00.301) 0:00:08.438 ********** 2026-03-29 01:23:46.551313 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:46.551317 | orchestrator | 2026-03-29 01:23:46.551330 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 01:23:46.551334 | orchestrator | Sunday 29 March 2026 01:23:38 +0000 (0:00:00.673) 0:00:09.112 ********** 2026-03-29 01:23:46.551338 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:46.551342 | orchestrator | 2026-03-29 01:23:46.551346 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-29 01:23:46.551349 | orchestrator | Sunday 29 March 2026 01:23:38 +0000 (0:00:00.256) 0:00:09.368 ********** 2026-03-29 01:23:46.551353 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:46.551357 | orchestrator | 2026-03-29 01:23:46.551360 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:23:46.551364 | orchestrator | Sunday 29 March 2026 01:23:39 +0000 (0:00:00.250) 0:00:09.619 ********** 2026-03-29 01:23:46.551368 | orchestrator | 2026-03-29 01:23:46.551371 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:23:46.551375 | orchestrator | Sunday 29 March 2026 01:23:39 +0000 (0:00:00.066) 0:00:09.685 ********** 2026-03-29 01:23:46.551379 | orchestrator | 2026-03-29 01:23:46.551382 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:23:46.551388 | orchestrator | Sunday 29 March 2026 01:23:39 +0000 (0:00:00.066) 0:00:09.751 ********** 2026-03-29 01:23:46.551392 | orchestrator | 2026-03-29 01:23:46.551396 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 01:23:46.551400 | 
orchestrator | Sunday 29 March 2026 01:23:39 +0000 (0:00:00.085) 0:00:09.836 ********** 2026-03-29 01:23:46.551403 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:46.551407 | orchestrator | 2026-03-29 01:23:46.551411 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-03-29 01:23:46.551414 | orchestrator | Sunday 29 March 2026 01:23:39 +0000 (0:00:00.259) 0:00:10.096 ********** 2026-03-29 01:23:46.551418 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:46.551422 | orchestrator | 2026-03-29 01:23:46.551426 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:23:46.551430 | orchestrator | Sunday 29 March 2026 01:23:39 +0000 (0:00:00.257) 0:00:10.354 ********** 2026-03-29 01:23:46.551435 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551439 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:46.551443 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:46.551447 | orchestrator | 2026-03-29 01:23:46.551452 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-03-29 01:23:46.551456 | orchestrator | Sunday 29 March 2026 01:23:40 +0000 (0:00:00.291) 0:00:10.645 ********** 2026-03-29 01:23:46.551460 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551464 | orchestrator | 2026-03-29 01:23:46.551469 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-03-29 01:23:46.551473 | orchestrator | Sunday 29 March 2026 01:23:40 +0000 (0:00:00.696) 0:00:11.342 ********** 2026-03-29 01:23:46.551477 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-29 01:23:46.551482 | orchestrator | 2026-03-29 01:23:46.551486 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-03-29 01:23:46.551493 | orchestrator | Sunday 29 March 2026 01:23:42 +0000 (0:00:01.359) 
0:00:12.701 ********** 2026-03-29 01:23:46.551497 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551502 | orchestrator | 2026-03-29 01:23:46.551506 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-29 01:23:46.551510 | orchestrator | Sunday 29 March 2026 01:23:42 +0000 (0:00:00.132) 0:00:12.834 ********** 2026-03-29 01:23:46.551515 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551519 | orchestrator | 2026-03-29 01:23:46.551523 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-29 01:23:46.551528 | orchestrator | Sunday 29 March 2026 01:23:42 +0000 (0:00:00.327) 0:00:13.162 ********** 2026-03-29 01:23:46.551532 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:46.551536 | orchestrator | 2026-03-29 01:23:46.551541 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-29 01:23:46.551545 | orchestrator | Sunday 29 March 2026 01:23:42 +0000 (0:00:00.150) 0:00:13.313 ********** 2026-03-29 01:23:46.551550 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551554 | orchestrator | 2026-03-29 01:23:46.551558 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:23:46.551563 | orchestrator | Sunday 29 March 2026 01:23:42 +0000 (0:00:00.140) 0:00:13.453 ********** 2026-03-29 01:23:46.551567 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551571 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:46.551575 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:46.551580 | orchestrator | 2026-03-29 01:23:46.551584 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-29 01:23:46.551588 | orchestrator | Sunday 29 March 2026 01:23:43 +0000 (0:00:00.292) 0:00:13.746 ********** 2026-03-29 01:23:46.551593 | orchestrator | changed: [testbed-node-3] 2026-03-29 
01:23:46.551597 | orchestrator | changed: [testbed-node-4] 2026-03-29 01:23:46.551601 | orchestrator | changed: [testbed-node-5] 2026-03-29 01:23:46.551606 | orchestrator | 2026-03-29 01:23:46.551610 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-29 01:23:46.551614 | orchestrator | Sunday 29 March 2026 01:23:45 +0000 (0:00:02.244) 0:00:15.991 ********** 2026-03-29 01:23:46.551619 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551623 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:46.551627 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:46.551631 | orchestrator | 2026-03-29 01:23:46.551636 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-03-29 01:23:46.551640 | orchestrator | Sunday 29 March 2026 01:23:45 +0000 (0:00:00.332) 0:00:16.324 ********** 2026-03-29 01:23:46.551644 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:46.551649 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:46.551653 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:46.551657 | orchestrator | 2026-03-29 01:23:46.551662 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-29 01:23:46.551666 | orchestrator | Sunday 29 March 2026 01:23:46 +0000 (0:00:00.519) 0:00:16.843 ********** 2026-03-29 01:23:46.551670 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:46.551675 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:23:46.551679 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:23:46.551683 | orchestrator | 2026-03-29 01:23:46.551691 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-29 01:23:55.640794 | orchestrator | Sunday 29 March 2026 01:23:46 +0000 (0:00:00.313) 0:00:17.156 ********** 2026-03-29 01:23:55.640869 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:55.640879 | orchestrator | ok: 
[testbed-node-4] 2026-03-29 01:23:55.640886 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:55.640892 | orchestrator | 2026-03-29 01:23:55.640898 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-29 01:23:55.640905 | orchestrator | Sunday 29 March 2026 01:23:47 +0000 (0:00:00.564) 0:00:17.721 ********** 2026-03-29 01:23:55.640911 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:55.640933 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:23:55.640940 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:23:55.640946 | orchestrator | 2026-03-29 01:23:55.640952 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-03-29 01:23:55.640959 | orchestrator | Sunday 29 March 2026 01:23:47 +0000 (0:00:00.307) 0:00:18.028 ********** 2026-03-29 01:23:55.640964 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:55.640970 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:23:55.640976 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:23:55.640982 | orchestrator | 2026-03-29 01:23:55.640997 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-29 01:23:55.641004 | orchestrator | Sunday 29 March 2026 01:23:47 +0000 (0:00:00.301) 0:00:18.329 ********** 2026-03-29 01:23:55.641010 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:55.641016 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:55.641022 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:55.641028 | orchestrator | 2026-03-29 01:23:55.641034 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-03-29 01:23:55.641040 | orchestrator | Sunday 29 March 2026 01:23:48 +0000 (0:00:00.496) 0:00:18.826 ********** 2026-03-29 01:23:55.641046 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:55.641052 | orchestrator | ok: [testbed-node-4] 2026-03-29 
01:23:55.641058 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:55.641064 | orchestrator | 2026-03-29 01:23:55.641070 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-03-29 01:23:55.641076 | orchestrator | Sunday 29 March 2026 01:23:48 +0000 (0:00:00.766) 0:00:19.592 ********** 2026-03-29 01:23:55.641082 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:55.641088 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:55.641093 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:55.641099 | orchestrator | 2026-03-29 01:23:55.641105 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-03-29 01:23:55.641111 | orchestrator | Sunday 29 March 2026 01:23:49 +0000 (0:00:00.322) 0:00:19.915 ********** 2026-03-29 01:23:55.641117 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:55.641123 | orchestrator | skipping: [testbed-node-4] 2026-03-29 01:23:55.641129 | orchestrator | skipping: [testbed-node-5] 2026-03-29 01:23:55.641136 | orchestrator | 2026-03-29 01:23:55.641143 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-03-29 01:23:55.641150 | orchestrator | Sunday 29 March 2026 01:23:49 +0000 (0:00:00.310) 0:00:20.226 ********** 2026-03-29 01:23:55.641156 | orchestrator | ok: [testbed-node-3] 2026-03-29 01:23:55.641162 | orchestrator | ok: [testbed-node-4] 2026-03-29 01:23:55.641167 | orchestrator | ok: [testbed-node-5] 2026-03-29 01:23:55.641173 | orchestrator | 2026-03-29 01:23:55.641179 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-29 01:23:55.641185 | orchestrator | Sunday 29 March 2026 01:23:50 +0000 (0:00:00.521) 0:00:20.747 ********** 2026-03-29 01:23:55.641191 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:55.641197 | orchestrator | 2026-03-29 01:23:55.641202 | orchestrator | TASK [Set 
validation result to failed if a test failed] ************************ 2026-03-29 01:23:55.641207 | orchestrator | Sunday 29 March 2026 01:23:50 +0000 (0:00:00.266) 0:00:21.013 ********** 2026-03-29 01:23:55.641213 | orchestrator | skipping: [testbed-node-3] 2026-03-29 01:23:55.641218 | orchestrator | 2026-03-29 01:23:55.641224 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-29 01:23:55.641229 | orchestrator | Sunday 29 March 2026 01:23:50 +0000 (0:00:00.260) 0:00:21.274 ********** 2026-03-29 01:23:55.641234 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:55.641240 | orchestrator | 2026-03-29 01:23:55.641245 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-29 01:23:55.641251 | orchestrator | Sunday 29 March 2026 01:23:52 +0000 (0:00:01.561) 0:00:22.835 ********** 2026-03-29 01:23:55.641257 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:55.641268 | orchestrator | 2026-03-29 01:23:55.641275 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-29 01:23:55.641281 | orchestrator | Sunday 29 March 2026 01:23:52 +0000 (0:00:00.268) 0:00:23.104 ********** 2026-03-29 01:23:55.641288 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:55.641294 | orchestrator | 2026-03-29 01:23:55.641300 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:23:55.641306 | orchestrator | Sunday 29 March 2026 01:23:52 +0000 (0:00:00.257) 0:00:23.362 ********** 2026-03-29 01:23:55.641312 | orchestrator | 2026-03-29 01:23:55.641318 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:23:55.641325 | orchestrator | Sunday 29 March 2026 01:23:52 +0000 (0:00:00.071) 0:00:23.434 ********** 2026-03-29 
01:23:55.641331 | orchestrator | 2026-03-29 01:23:55.641337 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-29 01:23:55.641344 | orchestrator | Sunday 29 March 2026 01:23:52 +0000 (0:00:00.068) 0:00:23.502 ********** 2026-03-29 01:23:55.641350 | orchestrator | 2026-03-29 01:23:55.641356 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-29 01:23:55.641363 | orchestrator | Sunday 29 March 2026 01:23:52 +0000 (0:00:00.073) 0:00:23.576 ********** 2026-03-29 01:23:55.641369 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-29 01:23:55.641374 | orchestrator | 2026-03-29 01:23:55.641380 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-29 01:23:55.641386 | orchestrator | Sunday 29 March 2026 01:23:54 +0000 (0:00:01.571) 0:00:25.148 ********** 2026-03-29 01:23:55.641405 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-29 01:23:55.641412 | orchestrator |  "msg": [ 2026-03-29 01:23:55.641417 | orchestrator |  "Validator run completed.", 2026-03-29 01:23:55.641424 | orchestrator |  "You can find the report file here:", 2026-03-29 01:23:55.641430 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-29T01:23:30+00:00-report.json", 2026-03-29 01:23:55.641437 | orchestrator |  "on the following host:", 2026-03-29 01:23:55.641443 | orchestrator |  "testbed-manager" 2026-03-29 01:23:55.641450 | orchestrator |  ] 2026-03-29 01:23:55.641456 | orchestrator | } 2026-03-29 01:23:55.641463 | orchestrator | 2026-03-29 01:23:55.641469 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:23:55.641476 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-29 01:23:55.641488 | orchestrator | testbed-node-4 : ok=18  changed=1 
 unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-29 01:23:55.641495 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-29 01:23:55.641501 | orchestrator | 2026-03-29 01:23:55.641508 | orchestrator | 2026-03-29 01:23:55.641514 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:23:55.641520 | orchestrator | Sunday 29 March 2026 01:23:55 +0000 (0:00:00.743) 0:00:25.892 ********** 2026-03-29 01:23:55.641526 | orchestrator | =============================================================================== 2026-03-29 01:23:55.641532 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.24s 2026-03-29 01:23:55.641539 | orchestrator | Write report file ------------------------------------------------------- 1.57s 2026-03-29 01:23:55.641545 | orchestrator | Aggregate test results step one ----------------------------------------- 1.56s 2026-03-29 01:23:55.641552 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.36s 2026-03-29 01:23:55.641558 | orchestrator | Get timestamp for report file ------------------------------------------- 0.94s 2026-03-29 01:23:55.641565 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.80s 2026-03-29 01:23:55.641577 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.77s 2026-03-29 01:23:55.641584 | orchestrator | Print report file information ------------------------------------------- 0.74s 2026-03-29 01:23:55.641591 | orchestrator | Create report output directory ------------------------------------------ 0.70s 2026-03-29 01:23:55.641598 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.70s 2026-03-29 01:23:55.641604 | orchestrator | Aggregate test results step one ----------------------------------------- 
0.67s 2026-03-29 01:23:55.641611 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.56s 2026-03-29 01:23:55.641618 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.55s 2026-03-29 01:23:55.641625 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.52s 2026-03-29 01:23:55.641632 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.52s 2026-03-29 01:23:55.641639 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.51s 2026-03-29 01:23:55.641646 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.50s 2026-03-29 01:23:55.641653 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2026-03-29 01:23:55.641660 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.47s 2026-03-29 01:23:55.641667 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.33s 2026-03-29 01:23:56.029175 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-29 01:23:56.034922 | orchestrator | + set -e 2026-03-29 01:23:56.034970 | orchestrator | + source /opt/manager-vars.sh 2026-03-29 01:23:56.034975 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-29 01:23:56.034979 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-29 01:23:56.034983 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-29 01:23:56.034987 | orchestrator | ++ CEPH_VERSION=reef 2026-03-29 01:23:56.034991 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-29 01:23:56.034995 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-29 01:23:56.034999 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 01:23:56.035003 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 01:23:56.035007 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2026-03-29 01:23:56.035010 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-29 01:23:56.035014 | orchestrator | ++ export ARA=false 2026-03-29 01:23:56.035018 | orchestrator | ++ ARA=false 2026-03-29 01:23:56.035022 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-29 01:23:56.035025 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-29 01:23:56.035029 | orchestrator | ++ export TEMPEST=true 2026-03-29 01:23:56.035033 | orchestrator | ++ TEMPEST=true 2026-03-29 01:23:56.035036 | orchestrator | ++ export IS_ZUUL=true 2026-03-29 01:23:56.035040 | orchestrator | ++ IS_ZUUL=true 2026-03-29 01:23:56.035044 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-03-29 01:23:56.035047 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231 2026-03-29 01:23:56.035051 | orchestrator | ++ export EXTERNAL_API=false 2026-03-29 01:23:56.035055 | orchestrator | ++ EXTERNAL_API=false 2026-03-29 01:23:56.035058 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-29 01:23:56.035062 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-29 01:23:56.035066 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-29 01:23:56.035069 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-29 01:23:56.035073 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-29 01:23:56.035077 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-29 01:23:56.035080 | orchestrator | + source /etc/os-release 2026-03-29 01:23:56.035084 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-29 01:23:56.035088 | orchestrator | ++ NAME=Ubuntu 2026-03-29 01:23:56.035091 | orchestrator | ++ VERSION_ID=24.04 2026-03-29 01:23:56.035095 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-29 01:23:56.035099 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-29 01:23:56.035102 | orchestrator | ++ ID=ubuntu 2026-03-29 01:23:56.035106 | orchestrator | ++ ID_LIKE=debian 2026-03-29 01:23:56.035110 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 
2026-03-29 01:23:56.035113 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-29 01:23:56.035117 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-29 01:23:56.035121 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-29 01:23:56.035136 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-29 01:23:56.035140 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-29 01:23:56.035144 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-29 01:23:56.035148 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-29 01:23:56.035152 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-29 01:23:56.055772 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-29 01:24:18.926080 | orchestrator | 2026-03-29 01:24:18.926154 | orchestrator | # Status of Elasticsearch 2026-03-29 01:24:18.926161 | orchestrator | 2026-03-29 01:24:18.926165 | orchestrator | + pushd /opt/configuration/contrib 2026-03-29 01:24:18.926170 | orchestrator | + echo 2026-03-29 01:24:18.926175 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-29 01:24:18.926179 | orchestrator | + echo 2026-03-29 01:24:18.926184 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-29 01:24:19.087758 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-29 01:24:19.087863 | orchestrator | 2026-03-29 01:24:19.087877 | orchestrator | # Status of MariaDB 2026-03-29 01:24:19.087925 | orchestrator | + echo 2026-03-29 01:24:19.087931 | orchestrator | + echo '# Status of MariaDB' 2026-03-29 01:24:19.087935 | orchestrator | + echo 2026-03-29 01:24:19.087940 | orchestrator | 2026-03-29 01:24:19.089432 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-29 01:24:19.144140 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 01:24:19.144214 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-29 01:24:19.144221 | orchestrator | + MARIADB_USER=root_shard_0 2026-03-29 01:24:19.144227 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-03-29 01:24:19.213935 | orchestrator | Reading package lists... 2026-03-29 01:24:19.519051 | orchestrator | Building dependency tree... 2026-03-29 01:24:19.519565 | orchestrator | Reading state information... 2026-03-29 01:24:19.853998 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-03-29 01:24:19.854144 | orchestrator | bc set to manually installed. 2026-03-29 01:24:19.854153 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2026-03-29 01:24:20.490620 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-03-29 01:24:20.492709 | orchestrator | 2026-03-29 01:24:20.492784 | orchestrator | # Status of Prometheus 2026-03-29 01:24:20.492793 | orchestrator | 2026-03-29 01:24:20.492799 | orchestrator | + echo 2026-03-29 01:24:20.492806 | orchestrator | + echo '# Status of Prometheus' 2026-03-29 01:24:20.492814 | orchestrator | + echo 2026-03-29 01:24:20.492839 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-29 01:24:20.578335 | orchestrator | Unauthorized 2026-03-29 01:24:20.581699 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-29 01:24:20.640812 | orchestrator | Unauthorized 2026-03-29 01:24:20.647324 | orchestrator | 2026-03-29 01:24:20.647412 | orchestrator | # Status of RabbitMQ 2026-03-29 01:24:20.647423 | orchestrator | 2026-03-29 01:24:20.647431 | orchestrator | + echo 2026-03-29 01:24:20.647438 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-29 01:24:20.647444 | orchestrator | + echo 2026-03-29 01:24:20.648337 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-29 01:24:20.700006 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-29 01:24:20.700078 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-29 01:24:20.700086 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-03-29 01:24:21.141835 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-03-29 01:24:21.153044 | orchestrator | 2026-03-29 01:24:21.153118 | orchestrator | # Status of Redis 2026-03-29 01:24:21.153130 | orchestrator | 2026-03-29 01:24:21.153138 | orchestrator | + echo 2026-03-29 01:24:21.153145 | orchestrator | + echo '# Status of Redis' 2026-03-29 01:24:21.153153 | orchestrator | + echo 2026-03-29 01:24:21.153161 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A 
-E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-29 01:24:21.159739 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002012s;;;0.000000;10.000000 2026-03-29 01:24:21.159825 | orchestrator | + popd 2026-03-29 01:24:21.159834 | orchestrator | 2026-03-29 01:24:21.159841 | orchestrator | # Create backup of MariaDB database 2026-03-29 01:24:21.159847 | orchestrator | 2026-03-29 01:24:21.159853 | orchestrator | + echo 2026-03-29 01:24:21.159859 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-29 01:24:21.159866 | orchestrator | + echo 2026-03-29 01:24:21.159873 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-29 01:24:23.176098 | orchestrator | 2026-03-29 01:24:23 | INFO  | Task 8bf866c6-f018-4061-8371-0c3296666ac1 (mariadb_backup) was prepared for execution. 2026-03-29 01:24:23.176182 | orchestrator | 2026-03-29 01:24:23 | INFO  | It takes a moment until task 8bf866c6-f018-4061-8371-0c3296666ac1 (mariadb_backup) has been started and output is visible here. 
2026-03-29 01:26:08.074773 | orchestrator | 2026-03-29 01:26:08.074871 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-29 01:26:08.074879 | orchestrator | 2026-03-29 01:26:08.074886 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-29 01:26:08.074892 | orchestrator | Sunday 29 March 2026 01:24:27 +0000 (0:00:00.130) 0:00:00.130 ********** 2026-03-29 01:26:08.074898 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:26:08.074909 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:26:08.074916 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:26:08.074922 | orchestrator | 2026-03-29 01:26:08.074928 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-29 01:26:08.074935 | orchestrator | Sunday 29 March 2026 01:24:27 +0000 (0:00:00.278) 0:00:00.408 ********** 2026-03-29 01:26:08.074941 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-29 01:26:08.074949 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-29 01:26:08.074956 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-29 01:26:08.074962 | orchestrator | 2026-03-29 01:26:08.074969 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-29 01:26:08.074975 | orchestrator | 2026-03-29 01:26:08.074981 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-29 01:26:08.074986 | orchestrator | Sunday 29 March 2026 01:24:27 +0000 (0:00:00.472) 0:00:00.881 ********** 2026-03-29 01:26:08.074993 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-29 01:26:08.074999 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-29 01:26:08.075005 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-29 01:26:08.075011 | orchestrator | 
2026-03-29 01:26:08.075017 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-29 01:26:08.075025 | orchestrator | Sunday 29 March 2026 01:24:28 +0000 (0:00:00.377) 0:00:01.258 ********** 2026-03-29 01:26:08.075032 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-29 01:26:08.075039 | orchestrator | 2026-03-29 01:26:08.075114 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-03-29 01:26:08.075123 | orchestrator | Sunday 29 March 2026 01:24:28 +0000 (0:00:00.458) 0:00:01.716 ********** 2026-03-29 01:26:08.075129 | orchestrator | ok: [testbed-node-1] 2026-03-29 01:26:08.075136 | orchestrator | ok: [testbed-node-0] 2026-03-29 01:26:08.075143 | orchestrator | ok: [testbed-node-2] 2026-03-29 01:26:08.075149 | orchestrator | 2026-03-29 01:26:08.075156 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-03-29 01:26:08.075162 | orchestrator | Sunday 29 March 2026 01:24:31 +0000 (0:00:02.776) 0:00:04.493 ********** 2026-03-29 01:26:08.075169 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-29 01:26:08.075174 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-29 01:26:08.075197 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-29 01:26:08.075208 | orchestrator | mariadb_bootstrap_restart 2026-03-29 01:26:08.075233 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:26:08.075239 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:26:08.075244 | orchestrator | changed: [testbed-node-0] 2026-03-29 01:26:08.075250 | orchestrator | 2026-03-29 01:26:08.075256 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-29 01:26:08.075262 | orchestrator | 
skipping: no hosts matched 2026-03-29 01:26:08.075267 | orchestrator | 2026-03-29 01:26:08.075273 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-29 01:26:08.075278 | orchestrator | skipping: no hosts matched 2026-03-29 01:26:08.075283 | orchestrator | 2026-03-29 01:26:08.075289 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-29 01:26:08.075294 | orchestrator | skipping: no hosts matched 2026-03-29 01:26:08.075299 | orchestrator | 2026-03-29 01:26:08.075305 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-29 01:26:08.075311 | orchestrator | 2026-03-29 01:26:08.075317 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-29 01:26:08.075323 | orchestrator | Sunday 29 March 2026 01:26:07 +0000 (0:01:35.566) 0:01:40.059 ********** 2026-03-29 01:26:08.075329 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:26:08.075335 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:26:08.075342 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:26:08.075349 | orchestrator | 2026-03-29 01:26:08.075355 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-29 01:26:08.075362 | orchestrator | Sunday 29 March 2026 01:26:07 +0000 (0:00:00.296) 0:01:40.355 ********** 2026-03-29 01:26:08.075368 | orchestrator | skipping: [testbed-node-0] 2026-03-29 01:26:08.075374 | orchestrator | skipping: [testbed-node-1] 2026-03-29 01:26:08.075380 | orchestrator | skipping: [testbed-node-2] 2026-03-29 01:26:08.075387 | orchestrator | 2026-03-29 01:26:08.075393 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-29 01:26:08.075401 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-29 
01:26:08.075409 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 01:26:08.075416 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-29 01:26:08.075423 | orchestrator | 2026-03-29 01:26:08.075429 | orchestrator | 2026-03-29 01:26:08.075436 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-29 01:26:08.075442 | orchestrator | Sunday 29 March 2026 01:26:07 +0000 (0:00:00.393) 0:01:40.749 ********** 2026-03-29 01:26:08.075448 | orchestrator | =============================================================================== 2026-03-29 01:26:08.075456 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 95.57s 2026-03-29 01:26:08.075478 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.78s 2026-03-29 01:26:08.075483 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-03-29 01:26:08.075487 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.46s 2026-03-29 01:26:08.075500 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.39s 2026-03-29 01:26:08.075504 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.38s 2026-03-29 01:26:08.075509 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2026-03-29 01:26:08.075513 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2026-03-29 01:26:08.368664 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-03-29 01:26:08.375116 | orchestrator | + set -e 2026-03-29 01:26:08.375183 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-29 01:26:08.375193 | orchestrator | ++ export 
INTERACTIVE=false 2026-03-29 01:26:08.375213 | orchestrator | ++ INTERACTIVE=false 2026-03-29 01:26:08.375219 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-29 01:26:08.375226 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-29 01:26:08.376400 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-29 01:26:08.376443 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-29 01:26:08.382576 | orchestrator | 2026-03-29 01:26:08.382652 | orchestrator | # OpenStack endpoints 2026-03-29 01:26:08.382662 | orchestrator | 2026-03-29 01:26:08.382670 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-29 01:26:08.382677 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-29 01:26:08.382685 | orchestrator | + export OS_CLOUD=admin 2026-03-29 01:26:08.382691 | orchestrator | + OS_CLOUD=admin 2026-03-29 01:26:08.382707 | orchestrator | + echo 2026-03-29 01:26:08.382714 | orchestrator | + echo '# OpenStack endpoints' 2026-03-29 01:26:08.382721 | orchestrator | + echo 2026-03-29 01:26:08.382728 | orchestrator | + openstack endpoint list 2026-03-29 01:26:11.551936 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-29 01:26:11.551996 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-03-29 01:26:11.552004 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-29 01:26:11.552011 | orchestrator | | 041c63dca4cf4679ad220ecb90702ee1 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-03-29 01:26:11.552017 | orchestrator | | 0fe77e03648a4721b7084aa2594fa9e5 | RegionOne | magnum | container-infra | True 
| internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-03-29 01:26:11.552036 | orchestrator | | 52d83698579549228fa198a11da90fc3 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-03-29 01:26:11.552097 | orchestrator | | 791ec39d489b49a3908c12cd4de0c3b0 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-03-29 01:26:11.552114 | orchestrator | | 8b5529b727c845bcbe8b6f5654125b4c | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-03-29 01:26:11.552124 | orchestrator | | 8cf8941c432c43d3aaf4cbf692524340 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-03-29 01:26:11.552132 | orchestrator | | 8ed0c872c1e242b697fd4c6f360c0d4c | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-03-29 01:26:11.552141 | orchestrator | | 9236b46adbec4694b254d7f475056957 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-03-29 01:26:11.552152 | orchestrator | | 9365e1f4b1b74a41a2f040a523d0f41b | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-03-29 01:26:11.552161 | orchestrator | | 9a469162c2ce464eb05cdda0a3f439cb | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-03-29 01:26:11.552172 | orchestrator | | 9d5ed0e8400f49ba9252c6d010ae31ef | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-29 01:26:11.552183 | orchestrator | | a1fa6d9916cd41db9faf4e4c19315c74 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-03-29 01:26:11.552193 | orchestrator | | a35be739a71d44e5990fe06c7603f8e9 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-03-29 01:26:11.552220 | 
orchestrator | | acf0097f181143c3adcb6d59e450eddd | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-29 01:26:11.552232 | orchestrator | | b9169f7ea8c949bf8fec0c81e8dcd8f2 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-03-29 01:26:11.552239 | orchestrator | | c1c2c85576ad420bbb96ce01f692fda7 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-03-29 01:26:11.552245 | orchestrator | | cf3a4665d388432698fe418d29ade220 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-03-29 01:26:11.552251 | orchestrator | | d1d09d681edf477ca17610304a9d3c43 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-03-29 01:26:11.552257 | orchestrator | | dc4f975c609d497a9ed9dcb90116f567 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-29 01:26:11.552263 | orchestrator | | dd201593f4c64642b44bde1c72432a08 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-03-29 01:26:11.552280 | orchestrator | | f5d93981e8344292b6736550345236d5 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-29 01:26:11.552286 | orchestrator | | fc161a93836142cc94c1a944c24a12f5 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-03-29 01:26:11.552292 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-29 01:26:11.787970 | orchestrator | 2026-03-29 01:26:11.788018 | orchestrator | # Cinder 2026-03-29 01:26:11.788023 | orchestrator | 2026-03-29 01:26:11.788028 | orchestrator | + echo 
2026-03-29 01:26:11.788032 | orchestrator | + echo '# Cinder' 2026-03-29 01:26:11.788036 | orchestrator | + echo 2026-03-29 01:26:11.788040 | orchestrator | + openstack volume service list 2026-03-29 01:26:15.404567 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-29 01:26:15.404647 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-03-29 01:26:15.404653 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-29 01:26:15.404672 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-29T01:26:08.000000 | 2026-03-29 01:26:15.404677 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-29T01:26:08.000000 | 2026-03-29 01:26:15.404681 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-29T01:26:08.000000 | 2026-03-29 01:26:15.404685 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-29T01:26:07.000000 | 2026-03-29 01:26:15.404689 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-29T01:26:09.000000 | 2026-03-29 01:26:15.404693 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-29T01:26:11.000000 | 2026-03-29 01:26:15.404697 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-29T01:26:10.000000 | 2026-03-29 01:26:15.404701 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-29T01:26:11.000000 | 2026-03-29 01:26:15.404705 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-29T01:26:13.000000 | 2026-03-29 01:26:15.404709 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-29 01:26:15.643723 | 
orchestrator | 2026-03-29 01:26:15.643809 | orchestrator | # Neutron 2026-03-29 01:26:15.643816 | orchestrator | 2026-03-29 01:26:15.643821 | orchestrator | + echo 2026-03-29 01:26:15.643825 | orchestrator | + echo '# Neutron' 2026-03-29 01:26:15.643830 | orchestrator | + echo 2026-03-29 01:26:15.643834 | orchestrator | + openstack network agent list 2026-03-29 01:26:18.309255 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-29 01:26:18.309346 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-03-29 01:26:18.309354 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-29 01:26:18.309358 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-03-29 01:26:18.309363 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-03-29 01:26:18.309367 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-03-29 01:26:18.309370 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-03-29 01:26:18.309374 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-03-29 01:26:18.309378 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-03-29 01:26:18.309381 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-29 01:26:18.309385 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | 
UP | neutron-ovn-metadata-agent | 2026-03-29 01:26:18.309389 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-29 01:26:18.309392 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-29 01:26:18.569685 | orchestrator | + openstack network service provider list 2026-03-29 01:26:20.990393 | orchestrator | +---------------+------+---------+ 2026-03-29 01:26:20.990490 | orchestrator | | Service Type | Name | Default | 2026-03-29 01:26:20.990497 | orchestrator | +---------------+------+---------+ 2026-03-29 01:26:20.990501 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-29 01:26:20.990505 | orchestrator | +---------------+------+---------+ 2026-03-29 01:26:21.249566 | orchestrator | 2026-03-29 01:26:21.249654 | orchestrator | # Nova 2026-03-29 01:26:21.249662 | orchestrator | 2026-03-29 01:26:21.249666 | orchestrator | + echo 2026-03-29 01:26:21.249670 | orchestrator | + echo '# Nova' 2026-03-29 01:26:21.249674 | orchestrator | + echo 2026-03-29 01:26:21.249679 | orchestrator | + openstack compute service list 2026-03-29 01:26:24.381858 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-29 01:26:24.381949 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-29 01:26:24.381960 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-29 01:26:24.381967 | orchestrator | | 55c74b9c-64b4-4247-b17f-9f2aaa0a34b8 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-29T01:26:21.000000 | 2026-03-29 01:26:24.381974 | orchestrator | | 6d6a79a8-1e9e-47f9-8f4b-9207b90ea02c | nova-scheduler | testbed-node-1 
| internal | enabled | up | 2026-03-29T01:26:22.000000 | 2026-03-29 01:26:24.382007 | orchestrator | | 38f2c0e0-5e72-4e51-b7b3-6961037f3f59 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-29T01:26:23.000000 | 2026-03-29 01:26:24.382056 | orchestrator | | 73139bbb-0b7d-4c64-9fbd-cd042334f85b | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-29T01:26:19.000000 | 2026-03-29 01:26:24.382083 | orchestrator | | 5c160e6f-dede-484c-b7bf-ad3f1cdc381f | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-29T01:26:21.000000 | 2026-03-29 01:26:24.382090 | orchestrator | | 84027ea0-068a-4ef6-8125-4bafe62f0242 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-29T01:26:21.000000 | 2026-03-29 01:26:24.382096 | orchestrator | | 70a15c2a-5bfe-43d5-9983-dd1e307a8f77 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-29T01:26:16.000000 | 2026-03-29 01:26:24.382102 | orchestrator | | 0a646c30-fd43-4515-b3c9-dd24c033957e | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-29T01:26:16.000000 | 2026-03-29 01:26:24.382108 | orchestrator | | 6a04fd6f-1ea3-48f8-976f-27a38d779355 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-29T01:26:17.000000 | 2026-03-29 01:26:24.382114 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-29 01:26:24.639509 | orchestrator | + openstack hypervisor list 2026-03-29 01:26:27.152590 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-29 01:26:27.152692 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-29 01:26:27.152703 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-29 01:26:27.152710 | orchestrator | | 2e3225ea-a8c5-438d-b003-a8a49496fde5 | 
testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-29 01:26:27.152716 | orchestrator | | 3dec7f46-66ce-417b-b08b-ee5346eb058f | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-29 01:26:27.152724 | orchestrator | | d45de630-7414-41c4-9314-60ab0e4070e8 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-29 01:26:27.152729 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-29 01:26:27.400168 | orchestrator | 2026-03-29 01:26:27.400251 | orchestrator | # Run OpenStack test play 2026-03-29 01:26:27.400262 | orchestrator | 2026-03-29 01:26:27.400269 | orchestrator | + echo 2026-03-29 01:26:27.400277 | orchestrator | + echo '# Run OpenStack test play' 2026-03-29 01:26:27.400285 | orchestrator | + echo 2026-03-29 01:26:27.400292 | orchestrator | + osism apply --environment openstack test 2026-03-29 01:26:29.341629 | orchestrator | 2026-03-29 01:26:29 | INFO  | Trying to run play test in environment openstack 2026-03-29 01:26:39.530612 | orchestrator | 2026-03-29 01:26:39 | INFO  | Task 76c6780e-acbf-4bb6-b946-9b8d3e97ee06 (test) was prepared for execution. 2026-03-29 01:26:39.530716 | orchestrator | 2026-03-29 01:26:39 | INFO  | It takes a moment until task 76c6780e-acbf-4bb6-b946-9b8d3e97ee06 (test) has been started and output is visible here. 
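The check script sources `manager-version.sh`, which (per the `+++ awk` trace above) pulls `manager_version` out of the configuration YAML with a single awk filter. A standalone sketch of that lookup, using an assumed sample file under `/tmp` rather than the real `/opt/configuration` path:

```shell
# Recreate a minimal configuration.yml and extract manager_version the same way
# manager-version.sh does in the trace above (file path and contents are assumptions).
cat > /tmp/sample-configuration.yml <<'EOF'
---
manager_version: 9.5.0
EOF

MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' /tmp/sample-configuration.yml)
echo "$MANAGER_VERSION"   # 9.5.0
```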
2026-03-29 01:29:08.099369 | orchestrator | 2026-03-29 01:29:08.099442 | orchestrator | PLAY [Create test project] ***************************************************** 2026-03-29 01:29:08.099459 | orchestrator | 2026-03-29 01:29:08.099471 | orchestrator | TASK [Create test domain] ****************************************************** 2026-03-29 01:29:08.099482 | orchestrator | Sunday 29 March 2026 01:26:43 +0000 (0:00:00.080) 0:00:00.080 ********** 2026-03-29 01:29:08.099494 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.099506 | orchestrator | 2026-03-29 01:29:08.099519 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-03-29 01:29:08.099531 | orchestrator | Sunday 29 March 2026 01:26:47 +0000 (0:00:03.684) 0:00:03.764 ********** 2026-03-29 01:29:08.099543 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.099554 | orchestrator | 2026-03-29 01:29:08.099566 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-03-29 01:29:08.099598 | orchestrator | Sunday 29 March 2026 01:26:51 +0000 (0:00:03.999) 0:00:07.763 ********** 2026-03-29 01:29:08.099610 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.099621 | orchestrator | 2026-03-29 01:29:08.099628 | orchestrator | TASK [Create test project] ***************************************************** 2026-03-29 01:29:08.099635 | orchestrator | Sunday 29 March 2026 01:26:57 +0000 (0:00:06.239) 0:00:14.003 ********** 2026-03-29 01:29:08.099644 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.099657 | orchestrator | 2026-03-29 01:29:08.099673 | orchestrator | TASK [Create test user] ******************************************************** 2026-03-29 01:29:08.099684 | orchestrator | Sunday 29 March 2026 01:27:01 +0000 (0:00:03.892) 0:00:17.896 ********** 2026-03-29 01:29:08.099693 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.099704 | orchestrator | 2026-03-29 01:29:08.099714 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-03-29 01:29:08.099723 | orchestrator | Sunday 29 March 2026 01:27:05 +0000 (0:00:04.064) 0:00:21.961 ********** 2026-03-29 01:29:08.099732 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-03-29 01:29:08.099741 | orchestrator | changed: [localhost] => (item=member) 2026-03-29 01:29:08.099751 | orchestrator | changed: [localhost] => (item=creator) 2026-03-29 01:29:08.099761 | orchestrator | 2026-03-29 01:29:08.099771 | orchestrator | TASK [Create test server group] ************************************************ 2026-03-29 01:29:08.099781 | orchestrator | Sunday 29 March 2026 01:27:16 +0000 (0:00:11.061) 0:00:33.023 ********** 2026-03-29 01:29:08.099791 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.099801 | orchestrator | 2026-03-29 01:29:08.099812 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-03-29 01:29:08.099823 | orchestrator | Sunday 29 March 2026 01:27:20 +0000 (0:00:04.054) 0:00:37.077 ********** 2026-03-29 01:29:08.099845 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.099853 | orchestrator | 2026-03-29 01:29:08.099859 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-03-29 01:29:08.099865 | orchestrator | Sunday 29 March 2026 01:27:25 +0000 (0:00:04.712) 0:00:41.789 ********** 2026-03-29 01:29:08.099871 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.099877 | orchestrator | 2026-03-29 01:29:08.099884 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-03-29 01:29:08.099890 | orchestrator | Sunday 29 March 2026 01:27:29 +0000 (0:00:04.085) 0:00:45.875 ********** 2026-03-29 01:29:08.099896 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.099902 | orchestrator | 2026-03-29 01:29:08.099908 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-03-29 01:29:08.099917 | orchestrator | Sunday 29 March 2026 01:27:33 +0000 (0:00:03.795) 0:00:49.670 ********** 2026-03-29 01:29:08.099930 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.099947 | orchestrator | 2026-03-29 01:29:08.099956 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-03-29 01:29:08.099966 | orchestrator | Sunday 29 March 2026 01:27:37 +0000 (0:00:03.895) 0:00:53.565 ********** 2026-03-29 01:29:08.099975 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.099986 | orchestrator | 2026-03-29 01:29:08.099996 | orchestrator | TASK [Create test network] ***************************************************** 2026-03-29 01:29:08.100005 | orchestrator | Sunday 29 March 2026 01:27:40 +0000 (0:00:03.688) 0:00:57.254 ********** 2026-03-29 01:29:08.100014 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.100023 | orchestrator | 2026-03-29 01:29:08.100033 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-03-29 01:29:08.100043 | orchestrator | Sunday 29 March 2026 01:27:45 +0000 (0:00:04.406) 0:01:01.660 ********** 2026-03-29 01:29:08.100055 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.100066 | orchestrator | 2026-03-29 01:29:08.100077 | orchestrator | TASK [Create test router] ****************************************************** 2026-03-29 01:29:08.100088 | orchestrator | Sunday 29 March 2026 01:27:50 +0000 (0:00:04.922) 0:01:06.583 ********** 2026-03-29 01:29:08.100098 | orchestrator | changed: [localhost] 2026-03-29 01:29:08.100123 | orchestrator | 2026-03-29 01:29:08.100136 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-03-29 01:29:08.100147 | orchestrator | 2026-03-29 01:29:08.100159 | orchestrator | TASK [Get test server group] *************************************************** 2026-03-29 01:29:08.100170 
| orchestrator | Sunday 29 March 2026 01:27:59 +0000 (0:00:09.192) 0:01:15.775 ********** 2026-03-29 01:29:08.100180 | orchestrator | ok: [localhost] 2026-03-29 01:29:08.100189 | orchestrator | 2026-03-29 01:29:08.100196 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-03-29 01:29:08.100206 | orchestrator | Sunday 29 March 2026 01:28:02 +0000 (0:00:03.400) 0:01:19.176 ********** 2026-03-29 01:29:08.100214 | orchestrator | skipping: [localhost] 2026-03-29 01:29:08.100241 | orchestrator | 2026-03-29 01:29:08.100253 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-03-29 01:29:08.100264 | orchestrator | Sunday 29 March 2026 01:28:02 +0000 (0:00:00.065) 0:01:19.241 ********** 2026-03-29 01:29:08.100274 | orchestrator | skipping: [localhost] 2026-03-29 01:29:08.100285 | orchestrator | 2026-03-29 01:29:08.100295 | orchestrator | TASK [Delete test instances] *************************************************** 2026-03-29 01:29:08.100307 | orchestrator | Sunday 29 March 2026 01:28:02 +0000 (0:00:00.056) 0:01:19.297 ********** 2026-03-29 01:29:08.100313 | orchestrator | skipping: [localhost] => (item=test-4)  2026-03-29 01:29:08.100319 | orchestrator | skipping: [localhost] => (item=test-3)  2026-03-29 01:29:08.100341 | orchestrator | skipping: [localhost] => (item=test-2)  2026-03-29 01:29:08.100348 | orchestrator | skipping: [localhost] => (item=test-1)  2026-03-29 01:29:08.100354 | orchestrator | skipping: [localhost] => (item=test)  2026-03-29 01:29:08.100360 | orchestrator | skipping: [localhost] 2026-03-29 01:29:08.100366 | orchestrator | 2026-03-29 01:29:08.100372 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-03-29 01:29:08.100378 | orchestrator | Sunday 29 March 2026 01:28:03 +0000 (0:00:00.160) 0:01:19.458 ********** 2026-03-29 01:29:08.100384 | orchestrator | skipping: [localhost] 2026-03-29 
01:29:08.100391 | orchestrator | 2026-03-29 01:29:08.100397 | orchestrator | TASK [Create test instances] *************************************************** 2026-03-29 01:29:08.100403 | orchestrator | Sunday 29 March 2026 01:28:03 +0000 (0:00:00.162) 0:01:19.620 ********** 2026-03-29 01:29:08.100409 | orchestrator | changed: [localhost] => (item=test) 2026-03-29 01:29:08.100415 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-29 01:29:08.100421 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-29 01:29:08.100427 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-29 01:29:08.100433 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-29 01:29:08.100440 | orchestrator | 2026-03-29 01:29:08.100446 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-03-29 01:29:08.100452 | orchestrator | Sunday 29 March 2026 01:28:07 +0000 (0:00:04.708) 0:01:24.329 ********** 2026-03-29 01:29:08.100458 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-03-29 01:29:08.100465 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-03-29 01:29:08.100471 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-03-29 01:29:08.100477 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 
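The `FAILED - RETRYING` messages above are Ansible polling async jobs with `retries`/`delay` until the instances report active. The same pattern in plain shell, as a hedged sketch (the `retry` helper and `probe` are illustrative, not part of the testbed scripts):

```shell
# retry MAX DELAY CMD... : re-run CMD until it succeeds or MAX attempts are used,
# sleeping DELAY seconds between tries -- analogous to the retries/delay polling above.
retry() {
    local max=$1 delay=$2 n=1
    shift 2
    until "$@"; do
        if [ "$n" -ge "$max" ]; then
            return 1
        fi
        n=$((n + 1))
        sleep "$delay"
    done
}

# Example probe that only succeeds on its third invocation.
attempts=0
probe() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}
retry 5 0 probe && echo "ready after $attempts checks"
```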
2026-03-29 01:29:08.100485 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j239302849986.2664', 'results_file': '/ansible/.ansible_async/j239302849986.2664', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-29 01:29:08.100506 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j499887092201.2689', 'results_file': '/ansible/.ansible_async/j499887092201.2689', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-29 01:29:08.100521 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j906430522728.2714', 'results_file': '/ansible/.ansible_async/j906430522728.2714', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-29 01:29:08.100539 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j335669895699.2739', 'results_file': '/ansible/.ansible_async/j335669895699.2739', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-29 01:29:08.100549 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j237535490787.2764', 'results_file': '/ansible/.ansible_async/j237535490787.2764', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-29 01:29:08.100559 | orchestrator |
2026-03-29 01:29:08.100570 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-03-29 01:29:08.100581 | orchestrator | Sunday 29 March 2026 01:28:54 +0000 (0:00:46.581) 0:02:10.910 **********
2026-03-29 01:29:08.100592 | orchestrator | changed: [localhost] => (item=test)
2026-03-29 01:29:08.100601 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-29 01:29:08.100612 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-29 01:29:08.100623 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-29 01:29:08.100635 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-29 01:29:08.100646 | orchestrator |
2026-03-29 01:29:08.100655 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-03-29 01:29:08.100664 | orchestrator | Sunday 29 March 2026 01:28:58 +0000 (0:00:04.409) 0:02:15.319 **********
2026-03-29 01:29:08.100677 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-03-29 01:29:08.100691 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j250518184226.2868', 'results_file': '/ansible/.ansible_async/j250518184226.2868', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-29 01:29:08.100702 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j477231686228.2893', 'results_file': '/ansible/.ansible_async/j477231686228.2893', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-29 01:29:08.100712 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j562650573287.2918', 'results_file': '/ansible/.ansible_async/j562650573287.2918', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-29 01:29:08.100731 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j702732888265.2943', 'results_file': '/ansible/.ansible_async/j702732888265.2943', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-29 01:29:46.895912 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j422372058853.2968', 'results_file': '/ansible/.ansible_async/j422372058853.2968', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-29 01:29:46.895973 | orchestrator |
2026-03-29 01:29:46.895981 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-03-29 01:29:46.895986 | orchestrator | Sunday 29 March 2026 01:29:08 +0000 (0:00:09.134) 0:02:24.454 **********
2026-03-29 01:29:46.895990 | orchestrator | changed: [localhost] => (item=test)
2026-03-29 01:29:46.895996 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-29 01:29:46.896003 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-29 01:29:46.896009 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-29 01:29:46.896016 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-29 01:29:46.896022 | orchestrator |
2026-03-29 01:29:46.896029 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-03-29 01:29:46.896035 | orchestrator | Sunday 29 March 2026 01:29:12 +0000 (0:00:04.224) 0:02:28.679 **********
2026-03-29 01:29:46.896055 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-03-29 01:29:46.896063 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j565848644068.3037', 'results_file': '/ansible/.ansible_async/j565848644068.3037', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-29 01:29:46.896070 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j480686772786.3062', 'results_file': '/ansible/.ansible_async/j480686772786.3062', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-29 01:29:46.896084 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j747026435277.3088', 'results_file': '/ansible/.ansible_async/j747026435277.3088', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-29 01:29:46.896091 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j90045040475.3114', 'results_file': '/ansible/.ansible_async/j90045040475.3114', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-29 01:29:46.896098 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j431355827451.3140', 'results_file': '/ansible/.ansible_async/j431355827451.3140', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-29 01:29:46.896104 | orchestrator |
2026-03-29 01:29:46.896110 | orchestrator | TASK [Create test volume] ******************************************************
2026-03-29 01:29:46.896117 | orchestrator | Sunday 29 March 2026 01:29:21 +0000 (0:00:09.511) 0:02:38.190 **********
2026-03-29 01:29:46.896123 | orchestrator | changed: [localhost]
2026-03-29 01:29:46.896130 | orchestrator |
2026-03-29 01:29:46.896136 | orchestrator | TASK [Attach test volume] ******************************************************
2026-03-29 01:29:46.896140 | orchestrator | Sunday 29 March 2026 01:29:28 +0000 (0:00:06.495) 0:02:44.685 **********
2026-03-29 01:29:46.896144 | orchestrator | changed: [localhost]
2026-03-29 01:29:46.896148 | orchestrator |
2026-03-29 01:29:46.896152 | orchestrator | TASK [Create floating ip address] **********************************************
2026-03-29 01:29:46.896156 | orchestrator | Sunday 29 March 2026 01:29:41 +0000 (0:00:13.238) 0:02:57.924 **********
2026-03-29 01:29:46.896161 | orchestrator | ok: [localhost]
2026-03-29 01:29:46.896167 | orchestrator |
2026-03-29 01:29:46.896173 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-03-29 01:29:46.896179 | orchestrator | Sunday 29 March 2026 01:29:46 +0000 (0:00:05.046) 0:03:02.970 **********
2026-03-29 01:29:46.896186 | orchestrator | ok: [localhost] => {
2026-03-29 01:29:46.896192 | orchestrator |     "msg": "192.168.112.112"
2026-03-29 01:29:46.896199 | orchestrator | }
2026-03-29 01:29:46.896205 | orchestrator |
2026-03-29 01:29:46.896211 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:29:46.896219 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-29 01:29:46.896226 | orchestrator |
2026-03-29 01:29:46.896232 | orchestrator |
2026-03-29 01:29:46.896238 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:29:46.896244 | orchestrator | Sunday 29 March 2026 01:29:46 +0000 (0:00:00.053) 0:03:03.024 **********
2026-03-29 01:29:46.896295 | orchestrator | ===============================================================================
2026-03-29 01:29:46.896302 | orchestrator | Wait for instance creation to complete --------------------------------- 46.58s
2026-03-29 01:29:46.896309 | orchestrator | Attach test volume ----------------------------------------------------- 13.24s
2026-03-29 01:29:46.896316 | orchestrator | Add member roles to user test ------------------------------------------ 11.06s
2026-03-29 01:29:46.896323 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.51s
2026-03-29 01:29:46.896330 | orchestrator | Create test router ------------------------------------------------------ 9.19s
2026-03-29 01:29:46.896342 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.13s
2026-03-29 01:29:46.896346 | orchestrator | Create test volume ------------------------------------------------------ 6.50s
2026-03-29 01:29:46.896359 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.24s
2026-03-29 01:29:46.896363 | orchestrator | Create floating ip address ---------------------------------------------- 5.05s
2026-03-29 01:29:46.896367 | orchestrator | Create test subnet ------------------------------------------------------ 4.92s
2026-03-29 01:29:46.896370 | orchestrator | Create ssh security group ----------------------------------------------- 4.71s
2026-03-29 01:29:46.896374 | orchestrator | Create test instances --------------------------------------------------- 4.71s
2026-03-29 01:29:46.896378 | orchestrator | Add metadata to instances ----------------------------------------------- 4.41s
2026-03-29 01:29:46.896381 | orchestrator | Create test network ----------------------------------------------------- 4.41s
2026-03-29 01:29:46.896385 | orchestrator | Add tag to instances ---------------------------------------------------- 4.22s
2026-03-29 01:29:46.896389 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.09s
2026-03-29 01:29:46.896394 | orchestrator | Create test user -------------------------------------------------------- 4.06s
2026-03-29 01:29:46.896400 | orchestrator | Create test server group ------------------------------------------------ 4.05s
2026-03-29 01:29:46.896406 | orchestrator | Create test-admin user -------------------------------------------------- 4.00s
2026-03-29 01:29:46.896412 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.90s
2026-03-29 01:29:47.177823 | orchestrator | + server_list
2026-03-29 01:29:47.177879 | orchestrator | + openstack --os-cloud test server list
2026-03-29 01:29:50.884235 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-29 01:29:50.884428 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-03-29 01:29:50.884443 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-29 01:29:50.884450 | orchestrator | | 052c9d25-e943-420b-9e0f-5be8a3b2994d | test-3 | ACTIVE | test=192.168.112.179, 192.168.200.171 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 01:29:50.884476 | orchestrator | | 6d003b0e-9b0c-4948-849d-79369049ef9c | test-4 | ACTIVE | test=192.168.112.133, 192.168.200.23 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 01:29:50.884483 | orchestrator | | 5b392894-f2cf-4fff-8479-567c1106ad5d | test-2 | ACTIVE | test=192.168.112.145, 192.168.200.211 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 01:29:50.884489 | orchestrator | | 7dad5e73-82df-4f14-b022-cceafcecff64 | test-1 | ACTIVE | test=192.168.112.148, 192.168.200.47 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 01:29:50.884495 | orchestrator | | fbf4e46c-c876-4054-8588-d3a98f00639c | test | ACTIVE | test=192.168.112.112, 192.168.200.163 | N/A (booted from volume) | SCS-1L-1 |
2026-03-29 01:29:50.884500 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-29 01:29:51.144800 | orchestrator | + openstack --os-cloud test server show test
2026-03-29 01:29:53.957235 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:29:53.957332 | orchestrator | | Field | Value |
2026-03-29 01:29:53.957347 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:29:53.957352 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-29 01:29:53.957356 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-29 01:29:53.957360 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-29 01:29:53.957364 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-03-29 01:29:53.957367 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-29 01:29:53.957371 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-29 01:29:53.957384 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-29 01:29:53.957388 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-29 01:29:53.957398 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-29 01:29:53.957402 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-29 01:29:53.957406 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-29 01:29:53.957410 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-29 01:29:53.957414 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-29 01:29:53.957418 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-29 01:29:53.957422 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-29 01:29:53.957428 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T01:28:38.000000 |
2026-03-29 01:29:53.957435 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-29 01:29:53.957439 | orchestrator | | accessIPv4 | |
2026-03-29 01:29:53.957445 | orchestrator | | accessIPv6 | |
2026-03-29 01:29:53.957449 | orchestrator | | addresses | test=192.168.112.112, 192.168.200.163 |
2026-03-29 01:29:53.957453 | orchestrator | | config_drive | |
2026-03-29 01:29:53.957457 | orchestrator | | created | 2026-03-29T01:28:12Z |
2026-03-29 01:29:53.957461 | orchestrator | | description | None |
2026-03-29 01:29:53.957465 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-29 01:29:53.957468 | orchestrator | | hostId | 54f43fa020c2fe956795042ef38f890ac6e1b3248e9d338c654fb882 |
2026-03-29 01:29:53.957474 | orchestrator | | host_status | None |
2026-03-29 01:29:53.957481 | orchestrator | | id | fbf4e46c-c876-4054-8588-d3a98f00639c |
2026-03-29 01:29:53.957487 | orchestrator | | image | N/A (booted from volume) |
2026-03-29 01:29:53.957491 | orchestrator | | key_name | test |
2026-03-29 01:29:53.957495 | orchestrator | | locked | False |
2026-03-29 01:29:53.957499 | orchestrator | | locked_reason | None |
2026-03-29 01:29:53.957503 | orchestrator | | name | test |
2026-03-29 01:29:53.957507 | orchestrator | | pinned_availability_zone | None |
2026-03-29 01:29:53.957510 | orchestrator | | progress | 0 |
2026-03-29 01:29:53.957519 | orchestrator | | project_id | a563122a5b1243d395e473077b1594e0 |
2026-03-29 01:29:53.957525 | orchestrator | | properties | hostname='test' |
2026-03-29 01:29:53.957544 | orchestrator | | security_groups | name='icmp' |
2026-03-29 01:29:53.957550 | orchestrator | | | name='ssh' |
2026-03-29 01:29:53.957557 | orchestrator | | server_groups | None |
2026-03-29 01:29:53.957564 | orchestrator | | status | ACTIVE |
2026-03-29 01:29:53.957570 | orchestrator | | tags | test |
2026-03-29 01:29:53.957577 | orchestrator | | trusted_image_certificates | None |
2026-03-29 01:29:53.957583 | orchestrator | | updated | 2026-03-29T01:29:00Z |
2026-03-29 01:29:53.957589 | orchestrator | | user_id | 61f8aa7e419b4d3fb2b55aeaaa3e419d |
2026-03-29 01:29:53.957597 | orchestrator | | volumes_attached | delete_on_termination='True', id='a19f354f-60cc-4163-955a-12124b0a964f' |
2026-03-29 01:29:53.957605 | orchestrator | | | delete_on_termination='False', id='0fbf1b54-fcf0-4b83-88b2-a7f7cb888883' |
2026-03-29 01:29:53.961226 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:29:54.188546 | orchestrator | + openstack --os-cloud test server show test-1
2026-03-29 01:29:56.885052 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:29:56.885114 | orchestrator | | Field | Value |
2026-03-29 01:29:56.885124 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:29:56.885132 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-29 01:29:56.885137 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-29 01:29:56.885141 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-29 01:29:56.885145 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-03-29 01:29:56.885162 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-29 01:29:56.885167 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-29 01:29:56.885180 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-29 01:29:56.885184 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-29 01:29:56.885188 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-29 01:29:56.885192 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-29 01:29:56.885196 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-29 01:29:56.885200 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-29 01:29:56.885211 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-29 01:29:56.885216 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-29 01:29:56.885225 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-29 01:29:56.885229 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T01:28:38.000000 |
2026-03-29 01:29:56.885235 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-29 01:29:56.885239 | orchestrator | | accessIPv4 | |
2026-03-29 01:29:56.885243 | orchestrator | | accessIPv6 | |
2026-03-29 01:29:56.885247 | orchestrator | | addresses | test=192.168.112.148, 192.168.200.47 |
2026-03-29 01:29:56.885251 | orchestrator | | config_drive | |
2026-03-29 01:29:56.885309 | orchestrator | | created | 2026-03-29T01:28:12Z |
2026-03-29 01:29:56.885314 | orchestrator | | description | None |
2026-03-29 01:29:56.885321 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-29 01:29:56.885327 | orchestrator | | hostId | 54f43fa020c2fe956795042ef38f890ac6e1b3248e9d338c654fb882 |
2026-03-29 01:29:56.885331 | orchestrator | | host_status | None |
2026-03-29 01:29:56.885339 | orchestrator | | id | 7dad5e73-82df-4f14-b022-cceafcecff64 |
2026-03-29 01:29:56.885343 | orchestrator | | image | N/A (booted from volume) |
2026-03-29 01:29:56.885347 | orchestrator | | key_name | test |
2026-03-29 01:29:56.885351 | orchestrator | | locked | False |
2026-03-29 01:29:56.885354 | orchestrator | | locked_reason | None |
2026-03-29 01:29:56.885358 | orchestrator | | name | test-1 |
2026-03-29 01:29:56.885364 | orchestrator | | pinned_availability_zone | None |
2026-03-29 01:29:56.885368 | orchestrator | | progress | 0 |
2026-03-29 01:29:56.885375 | orchestrator | | project_id | a563122a5b1243d395e473077b1594e0 |
2026-03-29 01:29:56.885378 | orchestrator | | properties | hostname='test-1' |
2026-03-29 01:29:56.885385 | orchestrator | | security_groups | name='icmp' |
2026-03-29 01:29:56.885389 | orchestrator | | | name='ssh' |
2026-03-29 01:29:56.885393 | orchestrator | | server_groups | None |
2026-03-29 01:29:56.885397 | orchestrator | | status | ACTIVE |
2026-03-29 01:29:56.885401 | orchestrator | | tags | test |
2026-03-29 01:29:56.885408 | orchestrator | | trusted_image_certificates | None |
2026-03-29 01:29:56.885412 | orchestrator | | updated | 2026-03-29T01:29:00Z |
2026-03-29 01:29:56.885416 | orchestrator | | user_id | 61f8aa7e419b4d3fb2b55aeaaa3e419d |
2026-03-29 01:29:56.885422 | orchestrator | | volumes_attached | delete_on_termination='True', id='866f74f9-d03b-432f-b1a9-6a1b371b1741' |
2026-03-29 01:29:56.890532 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:29:57.110046 | orchestrator | + openstack --os-cloud test server show test-2
2026-03-29 01:29:59.759183 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:29:59.759301 | orchestrator | | Field | Value |
2026-03-29 01:29:59.759316 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:29:59.759323 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-29 01:29:59.759330 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-29 01:29:59.759353 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-29 01:29:59.759358 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-03-29 01:29:59.759370 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-29 01:29:59.759374 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-29 01:29:59.759389 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-29 01:29:59.759393 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-29 01:29:59.759397 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-29 01:29:59.759401 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-29 01:29:59.759405 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-29 01:29:59.759412 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-29 01:29:59.759416 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-29 01:29:59.759420 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-29 01:29:59.759424 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-29 01:29:59.759428 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T01:28:39.000000 |
2026-03-29 01:29:59.759435 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-29 01:29:59.759439 | orchestrator | | accessIPv4 | |
2026-03-29 01:29:59.759443 | orchestrator | | accessIPv6 | |
2026-03-29 01:29:59.759447 | orchestrator | | addresses | test=192.168.112.145, 192.168.200.211 |
2026-03-29 01:29:59.759453 | orchestrator | | config_drive | |
2026-03-29 01:29:59.759457 | orchestrator | | created | 2026-03-29T01:28:13Z |
2026-03-29 01:29:59.759461 | orchestrator | | description | None |
2026-03-29 01:29:59.759465 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-29 01:29:59.759475 | orchestrator | | hostId | 3d62a19bf25aa232e4091b18beabfa25245c54db937dfdbf66287947 |
2026-03-29 01:29:59.759479 | orchestrator | | host_status | None |
2026-03-29 01:29:59.759486 | orchestrator | | id | 5b392894-f2cf-4fff-8479-567c1106ad5d |
2026-03-29 01:29:59.759490 | orchestrator | | image | N/A (booted from volume) |
2026-03-29 01:29:59.759494 | orchestrator | | key_name | test |
2026-03-29 01:29:59.759500 | orchestrator | | locked | False |
2026-03-29 01:29:59.759504 | orchestrator | | locked_reason | None |
2026-03-29 01:29:59.759508 | orchestrator | | name | test-2 |
2026-03-29 01:29:59.759512 | orchestrator | | pinned_availability_zone | None |
2026-03-29 01:29:59.759516 | orchestrator | | progress | 0 |
2026-03-29 01:29:59.759521 | orchestrator | | project_id | a563122a5b1243d395e473077b1594e0 |
2026-03-29 01:29:59.759525 | orchestrator | | properties | hostname='test-2' |
2026-03-29 01:29:59.759532 | orchestrator | | security_groups | name='icmp' |
2026-03-29 01:29:59.759536 | orchestrator | | | name='ssh' |
2026-03-29 01:29:59.759540 | orchestrator | | server_groups | None |
2026-03-29 01:29:59.759546 | orchestrator | | status | ACTIVE |
2026-03-29 01:29:59.759550 | orchestrator | | tags | test |
2026-03-29 01:29:59.759554 | orchestrator | | trusted_image_certificates | None |
2026-03-29 01:29:59.759557 | orchestrator | | updated | 2026-03-29T01:29:01Z |
2026-03-29 01:29:59.759561 | orchestrator | | user_id | 61f8aa7e419b4d3fb2b55aeaaa3e419d |
2026-03-29 01:29:59.759567 | orchestrator | | volumes_attached | delete_on_termination='True', id='35a5019c-487d-4893-acc8-ce0ed38c83f4' |
2026-03-29 01:29:59.763897 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:29:59.996466 | orchestrator | + openstack --os-cloud test server show test-3
2026-03-29 01:30:02.741096 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:30:02.741155 | orchestrator | | Field | Value |
2026-03-29 01:30:02.741177 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:30:02.741184 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-29 01:30:02.741188 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-29 01:30:02.741191 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-29 01:30:02.741194 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-03-29 01:30:02.741201 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-29 01:30:02.741210 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-29 01:30:02.741221 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-29 01:30:02.741225 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-29 01:30:02.741231 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-29 01:30:02.741234 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-29 01:30:02.741237 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-29 01:30:02.741241 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-29 01:30:02.741244 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-29 01:30:02.741247 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-29 01:30:02.741250 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-29 01:30:02.741267 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T01:28:40.000000 |
2026-03-29 01:30:02.741274 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-29 01:30:02.741283 | orchestrator | | accessIPv4 | |
2026-03-29 01:30:02.741287 | orchestrator | | accessIPv6 | |
2026-03-29 01:30:02.741290 | orchestrator | | addresses | test=192.168.112.179, 192.168.200.171 |
2026-03-29 01:30:02.741293 | orchestrator | | config_drive | |
2026-03-29 01:30:02.741297 | orchestrator | | created | 2026-03-29T01:28:16Z |
2026-03-29 01:30:02.741300 | orchestrator | | description | None |
2026-03-29 01:30:02.741303 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-29 01:30:02.741306 | orchestrator | | hostId | 06f2781cce212f781f5fb6a79d1527d535e0ae5c43f37fa3ca22b785 |
2026-03-29 01:30:02.741311 | orchestrator | | host_status | None |
2026-03-29 01:30:02.741320 | orchestrator | | id | 052c9d25-e943-420b-9e0f-5be8a3b2994d |
2026-03-29 01:30:02.741323 | orchestrator | | image | N/A (booted from volume) |
2026-03-29 01:30:02.741326 | orchestrator | | key_name | test |
2026-03-29 01:30:02.741330 | orchestrator | | locked | False |
2026-03-29 01:30:02.741333 | orchestrator | | locked_reason | None |
2026-03-29 01:30:02.741336 | orchestrator | | name | test-3 |
2026-03-29 01:30:02.741340 | orchestrator | | pinned_availability_zone | None |
2026-03-29 01:30:02.741343 | orchestrator | | progress | 0 |
2026-03-29 01:30:02.741346 | orchestrator | | project_id | a563122a5b1243d395e473077b1594e0 |
2026-03-29 01:30:02.741350 | orchestrator | | properties | hostname='test-3' |
2026-03-29 01:30:02.741358 | orchestrator | | security_groups | name='icmp' |
2026-03-29 01:30:02.741361 | orchestrator | | | name='ssh' |
2026-03-29 01:30:02.741364 | orchestrator | | server_groups | None |
2026-03-29 01:30:02.741368 | orchestrator | | status | ACTIVE |
2026-03-29 01:30:02.741371 | orchestrator | | tags | test |
2026-03-29 01:30:02.741374 | orchestrator | | trusted_image_certificates | None |
2026-03-29 01:30:02.741383 | orchestrator | | updated | 2026-03-29T01:29:02Z |
2026-03-29 01:30:02.741390 | orchestrator | | user_id | 61f8aa7e419b4d3fb2b55aeaaa3e419d |
2026-03-29 01:30:02.741541 | orchestrator | | volumes_attached | delete_on_termination='True', id='8ce03b3e-b64a-4952-8632-bd4cb3dc3f45' |
2026-03-29 01:30:02.744392 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:30:02.999550 | orchestrator | + openstack --os-cloud test server show test-4
2026-03-29 01:30:05.878496 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:30:05.878556 | orchestrator | | Field | Value |
2026-03-29 01:30:05.878565 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-03-29 01:30:05.878572 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-29 01:30:05.878580 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-29 01:30:05.878590 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-29 01:30:05.878597 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-03-29 01:30:05.878603 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-29 01:30:05.878620 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-29 01:30:05.878636 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-29 01:30:05.878643 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-29 01:30:05.878650 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-29 01:30:05.878656 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-29 01:30:05.878663 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-29 01:30:05.878670 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-29 01:30:05.878679 | orchestrator | |
OS-EXT-STS:power_state | Running | 2026-03-29 01:30:05.878685 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-29 01:30:05.878695 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-29 01:30:05.878702 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-29T01:28:39.000000 | 2026-03-29 01:30:05.878712 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-29 01:30:05.878718 | orchestrator | | accessIPv4 | | 2026-03-29 01:30:05.878725 | orchestrator | | accessIPv6 | | 2026-03-29 01:30:05.878731 | orchestrator | | addresses | test=192.168.112.133, 192.168.200.23 | 2026-03-29 01:30:05.878738 | orchestrator | | config_drive | | 2026-03-29 01:30:05.878745 | orchestrator | | created | 2026-03-29T01:28:14Z | 2026-03-29 01:30:05.878754 | orchestrator | | description | None | 2026-03-29 01:30:05.878765 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-29 01:30:05.878771 | orchestrator | | hostId | 3d62a19bf25aa232e4091b18beabfa25245c54db937dfdbf66287947 | 2026-03-29 01:30:05.878778 | orchestrator | | host_status | None | 2026-03-29 01:30:05.878787 | orchestrator | | id | 6d003b0e-9b0c-4948-849d-79369049ef9c | 2026-03-29 01:30:05.878794 | orchestrator | | image | N/A (booted from volume) | 2026-03-29 01:30:05.878800 | orchestrator | | key_name | test | 2026-03-29 01:30:05.878807 | orchestrator | | locked | False | 2026-03-29 01:30:05.878813 | orchestrator | | locked_reason | None | 2026-03-29 01:30:05.878820 | orchestrator | | name | test-4 | 2026-03-29 01:30:05.878829 | orchestrator | | pinned_availability_zone | None | 2026-03-29 01:30:05.878840 | orchestrator | | progress | 0 | 2026-03-29 
01:30:05.878847 | orchestrator | | project_id | a563122a5b1243d395e473077b1594e0 | 2026-03-29 01:30:05.878853 | orchestrator | | properties | hostname='test-4' | 2026-03-29 01:30:05.878863 | orchestrator | | security_groups | name='icmp' | 2026-03-29 01:30:05.878870 | orchestrator | | | name='ssh' | 2026-03-29 01:30:05.878876 | orchestrator | | server_groups | None | 2026-03-29 01:30:05.878882 | orchestrator | | status | ACTIVE | 2026-03-29 01:30:05.878889 | orchestrator | | tags | test | 2026-03-29 01:30:05.878895 | orchestrator | | trusted_image_certificates | None | 2026-03-29 01:30:05.878907 | orchestrator | | updated | 2026-03-29T01:29:02Z | 2026-03-29 01:30:05.878914 | orchestrator | | user_id | 61f8aa7e419b4d3fb2b55aeaaa3e419d | 2026-03-29 01:30:05.878921 | orchestrator | | volumes_attached | delete_on_termination='True', id='09c038cd-c313-4d69-9e10-ac9d3cbe03c8' | 2026-03-29 01:30:05.882503 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-29 01:30:06.163716 | orchestrator | + server_ping 2026-03-29 01:30:06.165118 | orchestrator | ++ tr -d '\r' 2026-03-29 01:30:06.165186 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-29 01:30:08.926949 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-29 01:30:08.926999 | orchestrator | + ping -c3 192.168.112.133 2026-03-29 01:30:08.941150 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 
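The `server_ping` helper being traced above amounts to a small loop over the ACTIVE floating IPs. A minimal sketch, assuming only what the trace shows (cloud name `test`, three pings per address); the `strip_cr` helper name is mine, introduced to make the CRLF handling explicit:

```shell
#!/bin/sh
# Strip carriage returns; the openstack client output can carry CRLF line
# endings which would otherwise corrupt the ping targets (hence the
# `tr -d '\r'` in the trace above).
strip_cr() { tr -d '\r'; }

# Ping every ACTIVE floating IP three times, as the traced server_ping does.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | strip_cr); do
        ping -c3 "$address"
    done
}
```

Using `-f value -c "Floating IP Address"` keeps the output to one bare address per line, which is what makes the unquoted word-splitting in the `for` loop safe here.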
2026-03-29 01:30:08.941212 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=9.61 ms
2026-03-29 01:30:09.935315 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=1.83 ms
2026-03-29 01:30:10.935590 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.57 ms
2026-03-29 01:30:10.935644 | orchestrator |
2026-03-29 01:30:10.935652 | orchestrator | --- 192.168.112.133 ping statistics ---
2026-03-29 01:30:10.935659 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-29 01:30:10.935663 | orchestrator | rtt min/avg/max/mdev = 1.567/4.335/9.608/3.730 ms
2026-03-29 01:30:10.936078 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:30:10.936088 | orchestrator | + ping -c3 192.168.112.179
2026-03-29 01:30:10.945415 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-03-29 01:30:10.945471 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=4.68 ms
2026-03-29 01:30:11.944359 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=1.65 ms
2026-03-29 01:30:12.946813 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.49 ms
2026-03-29 01:30:12.946864 | orchestrator |
2026-03-29 01:30:12.946870 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-03-29 01:30:12.946875 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-29 01:30:12.946879 | orchestrator | rtt min/avg/max/mdev = 1.494/2.610/4.684/1.467 ms
2026-03-29 01:30:12.946889 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:30:12.946908 | orchestrator | + ping -c3 192.168.112.145
2026-03-29 01:30:12.959663 | orchestrator | PING 192.168.112.145 (192.168.112.145) 56(84) bytes of data.
2026-03-29 01:30:12.959722 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=1 ttl=63 time=7.72 ms
2026-03-29 01:30:13.955903 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=2 ttl=63 time=1.73 ms
2026-03-29 01:30:14.957592 | orchestrator | 64 bytes from 192.168.112.145: icmp_seq=3 ttl=63 time=1.48 ms
2026-03-29 01:30:14.957656 | orchestrator |
2026-03-29 01:30:14.957666 | orchestrator | --- 192.168.112.145 ping statistics ---
2026-03-29 01:30:14.957674 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-29 01:30:14.957681 | orchestrator | rtt min/avg/max/mdev = 1.484/3.644/7.719/2.882 ms
2026-03-29 01:30:14.957695 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:30:14.957703 | orchestrator | + ping -c3 192.168.112.112
2026-03-29 01:30:14.968146 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2026-03-29 01:30:14.968194 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=5.94 ms
2026-03-29 01:30:15.965381 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=1.98 ms
2026-03-29 01:30:16.966602 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.35 ms
2026-03-29 01:30:16.966738 | orchestrator |
2026-03-29 01:30:16.966751 | orchestrator | --- 192.168.112.112 ping statistics ---
2026-03-29 01:30:16.966760 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-29 01:30:16.966767 | orchestrator | rtt min/avg/max/mdev = 1.347/3.087/5.937/2.031 ms
2026-03-29 01:30:16.966784 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-29 01:30:16.966794 | orchestrator | + ping -c3 192.168.112.148
2026-03-29 01:30:16.981313 | orchestrator | PING 192.168.112.148 (192.168.112.148) 56(84) bytes of data.
2026-03-29 01:30:16.981364 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=1 ttl=63 time=10.1 ms
2026-03-29 01:30:17.975186 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=2 ttl=63 time=2.45 ms
2026-03-29 01:30:18.976736 | orchestrator | 64 bytes from 192.168.112.148: icmp_seq=3 ttl=63 time=1.60 ms
2026-03-29 01:30:18.976831 | orchestrator |
2026-03-29 01:30:18.976841 | orchestrator | --- 192.168.112.148 ping statistics ---
2026-03-29 01:30:18.976850 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-29 01:30:18.976857 | orchestrator | rtt min/avg/max/mdev = 1.603/4.717/10.098/3.820 ms
2026-03-29 01:30:18.976873 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-29 01:30:19.174262 | orchestrator | ok: Runtime: 0:08:52.752135
2026-03-29 01:30:19.241663 |
2026-03-29 01:30:19.241868 | TASK [Run tempest]
2026-03-29 01:30:20.049466 | orchestrator | + set -e
2026-03-29 01:30:20.049610 | orchestrator | + source /opt/manager-vars.sh
2026-03-29 01:30:20.050441 | orchestrator |
2026-03-29 01:30:20.050484 | orchestrator | # Tempest
2026-03-29 01:30:20.050490 | orchestrator |
2026-03-29 01:30:20.050495 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-29 01:30:20.050506 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-29 01:30:20.050527 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-29 01:30:20.050536 | orchestrator | ++ CEPH_VERSION=reef
2026-03-29 01:30:20.050543 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-29 01:30:20.050548 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-29 01:30:20.050557 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-29 01:30:20.050563 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-29 01:30:20.050567 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-29 01:30:20.050574 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-29 01:30:20.050578 | orchestrator | ++ export ARA=false
2026-03-29 01:30:20.050583 | orchestrator | ++ ARA=false
2026-03-29 01:30:20.050592 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-29 01:30:20.050596 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-29 01:30:20.050600 | orchestrator | ++ export TEMPEST=true
2026-03-29 01:30:20.050606 | orchestrator | ++ TEMPEST=true
2026-03-29 01:30:20.050610 | orchestrator | ++ export IS_ZUUL=true
2026-03-29 01:30:20.050614 | orchestrator | ++ IS_ZUUL=true
2026-03-29 01:30:20.050618 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231
2026-03-29 01:30:20.050622 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.231
2026-03-29 01:30:20.050626 | orchestrator | ++ export EXTERNAL_API=false
2026-03-29 01:30:20.050630 | orchestrator | ++ EXTERNAL_API=false
2026-03-29 01:30:20.050634 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-29 01:30:20.050638 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-29 01:30:20.050642 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-29 01:30:20.050646 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-29 01:30:20.050650 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-29 01:30:20.050654 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-29 01:30:20.050658 | orchestrator | + echo
2026-03-29 01:30:20.050662 | orchestrator | + echo '# Tempest'
2026-03-29 01:30:20.050666 | orchestrator | + echo
2026-03-29 01:30:20.050959 | orchestrator | + [[ ! -e /opt/tempest ]]
2026-03-29 01:30:20.050965 | orchestrator | + osism apply tempest --skip-tags run-tempest
2026-03-29 01:30:32.124183 | orchestrator | 2026-03-29 01:30:32 | INFO  | Task c3bf29f1-71a0-4fd6-9a39-4281beab827e (tempest) was prepared for execution.
2026-03-29 01:30:32.124307 | orchestrator | 2026-03-29 01:30:32 | INFO  | It takes a moment until task c3bf29f1-71a0-4fd6-9a39-4281beab827e (tempest) has been started and output is visible here.
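The wrapper sources `/opt/manager-vars.sh` and, as seen earlier, gates on `[[ 9.5.0 == \l\a\t\e\s\t ]]`; the backslashes are just bash's trace escaping, so the gate is a literal string comparison of `$MANAGER_VERSION` against `latest`. A minimal sketch of that pattern, using a throwaway `/tmp` stand-in for the vars file and a POSIX `[` test in place of the `[[ ... ]]`:

```shell
#!/bin/sh
set -e

# Throwaway stand-in for /opt/manager-vars.sh; values mirror the trace above.
cat > /tmp/demo-manager-vars.sh <<'EOF'
export MANAGER_VERSION=9.5.0
export OPENSTACK_VERSION=2024.2
export TEMPEST=true
EOF

. /tmp/demo-manager-vars.sh

# Only the literal string "latest" takes the branch; a pinned version
# such as 9.5.0 skips it, which is what the trace shows.
if [ "$MANAGER_VERSION" = latest ]; then
    echo "tracking latest"
else
    echo "pinned to $MANAGER_VERSION"
fi
```

Exporting each variable in the sourced file (rather than plain assignment) is what makes the values visible to child processes such as `osism apply`.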
2026-03-29 01:31:49.010171 | orchestrator |
2026-03-29 01:31:49.010246 | orchestrator | PLAY [Run tempest] *************************************************************
2026-03-29 01:31:49.010259 | orchestrator |
2026-03-29 01:31:49.010268 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] **********************
2026-03-29 01:31:49.010283 | orchestrator | Sunday 29 March 2026 01:30:36 +0000 (0:00:00.237) 0:00:00.237 **********
2026-03-29 01:31:49.010289 | orchestrator | changed: [testbed-manager]
2026-03-29 01:31:49.010296 | orchestrator |
2026-03-29 01:31:49.010302 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] *****************
2026-03-29 01:31:49.010319 | orchestrator | Sunday 29 March 2026 01:30:37 +0000 (0:00:00.706) 0:00:00.944 **********
2026-03-29 01:31:49.010326 | orchestrator | changed: [testbed-manager]
2026-03-29 01:31:49.010332 | orchestrator |
2026-03-29 01:31:49.010339 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] ***
2026-03-29 01:31:49.010345 | orchestrator | Sunday 29 March 2026 01:30:38 +0000 (0:00:01.283) 0:00:02.228 **********
2026-03-29 01:31:49.010351 | orchestrator | ok: [testbed-manager]
2026-03-29 01:31:49.010357 | orchestrator |
2026-03-29 01:31:49.010362 | orchestrator | TASK [osism.validations.tempest : Init tempest] ********************************
2026-03-29 01:31:49.010369 | orchestrator | Sunday 29 March 2026 01:30:38 +0000 (0:00:00.440) 0:00:02.668 **********
2026-03-29 01:31:49.010374 | orchestrator | changed: [testbed-manager]
2026-03-29 01:31:49.010380 | orchestrator |
2026-03-29 01:31:49.010386 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] ***************************
2026-03-29 01:31:49.010392 | orchestrator | Sunday 29 March 2026 01:31:00 +0000 (0:00:21.856) 0:00:24.524 **********
2026-03-29 01:31:49.010408 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3)
2026-03-29 01:31:49.010435 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2)
2026-03-29 01:31:49.010442 | orchestrator |
2026-03-29 01:31:49.010451 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************
2026-03-29 01:31:49.010457 | orchestrator | Sunday 29 March 2026 01:31:08 +0000 (0:00:07.652) 0:00:32.177 **********
2026-03-29 01:31:49.010463 | orchestrator | ok: [testbed-manager] => {
2026-03-29 01:31:49.010469 | orchestrator |  "changed": false,
2026-03-29 01:31:49.010474 | orchestrator |  "msg": "All assertions passed"
2026-03-29 01:31:49.010480 | orchestrator | }
2026-03-29 01:31:49.010486 | orchestrator |
2026-03-29 01:31:49.010492 | orchestrator | TASK [osism.validations.tempest : Get auth token] ******************************
2026-03-29 01:31:49.010498 | orchestrator | Sunday 29 March 2026 01:31:08 +0000 (0:00:00.153) 0:00:32.330 **********
2026-03-29 01:31:49.010505 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:31:49.010510 | orchestrator |
2026-03-29 01:31:49.010516 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] ************************
2026-03-29 01:31:49.010522 | orchestrator | Sunday 29 March 2026 01:31:12 +0000 (0:00:03.452) 0:00:35.783 **********
2026-03-29 01:31:49.010528 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:31:49.010535 | orchestrator |
2026-03-29 01:31:49.010541 | orchestrator | TASK [osism.validations.tempest : Get service catalog] *************************
2026-03-29 01:31:49.010547 | orchestrator | Sunday 29 March 2026 01:31:13 +0000 (0:00:01.680) 0:00:37.463 **********
2026-03-29 01:31:49.010554 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:31:49.010560 | orchestrator |
2026-03-29 01:31:49.010566 | orchestrator | TASK [osism.validations.tempest : Register img_file name] **********************
2026-03-29 01:31:49.010573 | orchestrator | Sunday 29 March 2026 01:31:17 +0000 (0:00:03.479) 0:00:40.943 **********
2026-03-29 01:31:49.010579 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:31:49.010586 | orchestrator |
2026-03-29 01:31:49.010592 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************
2026-03-29 01:31:49.010598 | orchestrator | Sunday 29 March 2026 01:31:17 +0000 (0:00:00.181) 0:00:41.124 **********
2026-03-29 01:31:49.010605 | orchestrator | changed: [testbed-manager]
2026-03-29 01:31:49.010611 | orchestrator |
2026-03-29 01:31:49.010618 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ******************
2026-03-29 01:31:49.010625 | orchestrator | Sunday 29 March 2026 01:31:19 +0000 (0:00:02.444) 0:00:43.568 **********
2026-03-29 01:31:49.010632 | orchestrator | changed: [testbed-manager]
2026-03-29 01:31:49.010638 | orchestrator |
2026-03-29 01:31:49.010645 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************
2026-03-29 01:31:49.010651 | orchestrator | Sunday 29 March 2026 01:31:29 +0000 (0:00:09.929) 0:00:53.497 **********
2026-03-29 01:31:49.010658 | orchestrator | changed: [testbed-manager]
2026-03-29 01:31:49.010664 | orchestrator |
2026-03-29 01:31:49.010670 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ******************
2026-03-29 01:31:49.010676 | orchestrator | Sunday 29 March 2026 01:31:30 +0000 (0:00:00.783) 0:00:54.281 **********
2026-03-29 01:31:49.010682 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:31:49.010689 | orchestrator |
2026-03-29 01:31:49.010695 | orchestrator | TASK [osism.validations.tempest : Revoke token] ********************************
2026-03-29 01:31:49.010702 | orchestrator | Sunday 29 March 2026 01:31:32 +0000 (0:00:01.499) 0:00:55.781 **********
2026-03-29 01:31:49.010708 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:31:49.010715 | orchestrator |
2026-03-29 01:31:49.010721 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] ***
2026-03-29 01:31:49.010728 | orchestrator | Sunday 29 March 2026 01:31:33 +0000 (0:00:01.514) 0:00:57.295 **********
2026-03-29 01:31:49.010741 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:31:49.010749 | orchestrator |
2026-03-29 01:31:49.010755 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] *********
2026-03-29 01:31:49.010761 | orchestrator | Sunday 29 March 2026 01:31:33 +0000 (0:00:00.185) 0:00:57.481 **********
2026-03-29 01:31:49.010781 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:31:49.010788 | orchestrator |
2026-03-29 01:31:49.010795 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] *****************
2026-03-29 01:31:49.010807 | orchestrator | Sunday 29 March 2026 01:31:33 +0000 (0:00:00.195) 0:00:57.677 **********
2026-03-29 01:31:49.010814 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-29 01:31:49.010821 | orchestrator |
2026-03-29 01:31:49.010828 | orchestrator | TASK [osism.validations.tempest : Assert floating network id has been resolved] ***
2026-03-29 01:31:49.010848 | orchestrator | Sunday 29 March 2026 01:31:37 +0000 (0:00:03.711) 0:01:01.388 **********
2026-03-29 01:31:49.010855 | orchestrator | ok: [testbed-manager -> localhost] => {
2026-03-29 01:31:49.010861 | orchestrator |  "changed": false,
2026-03-29 01:31:49.010868 | orchestrator |  "msg": "All assertions passed"
2026-03-29 01:31:49.010875 | orchestrator | }
2026-03-29 01:31:49.010882 | orchestrator |
2026-03-29 01:31:49.010889 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] **************************
2026-03-29 01:31:49.010896 | orchestrator | Sunday 29 March 2026 01:31:37 +0000 (0:00:00.184) 0:01:01.573 **********
2026-03-29 01:31:49.010903 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-29 01:31:49.010911 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-29 01:31:49.010917 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:31:49.010924 | orchestrator |
2026-03-29 01:31:49.010930 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] ***********
2026-03-29 01:31:49.010937 | orchestrator | Sunday 29 March 2026 01:31:38 +0000 (0:00:00.439) 0:01:02.012 **********
2026-03-29 01:31:49.010943 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:31:49.010948 | orchestrator |
2026-03-29 01:31:49.010954 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] *******************
2026-03-29 01:31:49.010961 | orchestrator | Sunday 29 March 2026 01:31:38 +0000 (0:00:00.163) 0:01:02.175 **********
2026-03-29 01:31:49.010968 | orchestrator | ok: [testbed-manager]
2026-03-29 01:31:49.010975 | orchestrator |
2026-03-29 01:31:49.010981 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] ***************************
2026-03-29 01:31:49.010987 | orchestrator | Sunday 29 March 2026 01:31:38 +0000 (0:00:00.475) 0:01:02.651 **********
2026-03-29 01:31:49.010994 | orchestrator | changed: [testbed-manager]
2026-03-29 01:31:49.011001 | orchestrator |
2026-03-29 01:31:49.011007 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] *******************
2026-03-29 01:31:49.011014 | orchestrator | Sunday 29 March 2026 01:31:39 +0000 (0:00:00.887) 0:01:03.539 **********
2026-03-29 01:31:49.011021 | orchestrator | ok: [testbed-manager]
2026-03-29 01:31:49.011028 | orchestrator |
2026-03-29 01:31:49.011034 | orchestrator | TASK [osism.validations.tempest : Copy include list] ***************************
2026-03-29 01:31:49.011040 | orchestrator | Sunday 29 March 2026 01:31:40 +0000 (0:00:00.499) 0:01:04.039 **********
2026-03-29 01:31:49.011047 | orchestrator | skipping: [testbed-manager]
2026-03-29 01:31:49.011054 | orchestrator |
2026-03-29 01:31:49.011060 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] **********************
2026-03-29 01:31:49.011067 | orchestrator | Sunday 29 March 2026 01:31:40 +0000 (0:00:00.138) 0:01:04.177 **********
2026-03-29 01:31:49.011074 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})
2026-03-29 01:31:49.011080 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})
2026-03-29 01:31:49.011087 | orchestrator |
2026-03-29 01:31:49.011094 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] **********************
2026-03-29 01:31:49.011101 | orchestrator | Sunday 29 March 2026 01:31:47 +0000 (0:00:07.565) 0:01:11.742 **********
2026-03-29 01:31:49.011108 | orchestrator | changed: [testbed-manager]
2026-03-29 01:31:49.011114 | orchestrator |
2026-03-29 01:31:49.011120 | orchestrator | PLAY RECAP *********************************************************************
2026-03-29 01:31:49.011131 | orchestrator | testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-29 01:31:49.011138 | orchestrator |
2026-03-29 01:31:49.011145 | orchestrator |
2026-03-29 01:31:49.011151 | orchestrator | TASKS RECAP ********************************************************************
2026-03-29 01:31:49.011158 | orchestrator | Sunday 29 March 2026 01:31:48 +0000 (0:00:01.004) 0:01:12.747 **********
2026-03-29 01:31:49.011165 | orchestrator | ===============================================================================
2026-03-29 01:31:49.011171 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 21.86s
2026-03-29 01:31:49.011178 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 9.93s
2026-03-29 01:31:49.011184 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 7.65s
2026-03-29 01:31:49.011191 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.57s
2026-03-29 01:31:49.011197 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.71s
2026-03-29 01:31:49.011204 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.48s
2026-03-29 01:31:49.011211 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.45s
2026-03-29 01:31:49.011217 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.44s
2026-03-29 01:31:49.011224 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.68s
2026-03-29 01:31:49.011231 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.51s
2026-03-29 01:31:49.011241 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.50s
2026-03-29 01:31:49.011248 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.28s
2026-03-29 01:31:49.011255 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 1.00s
2026-03-29 01:31:49.011262 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.89s
2026-03-29 01:31:49.011269 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.78s
2026-03-29 01:31:49.011275 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 0.71s
2026-03-29 01:31:49.011282 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.50s
2026-03-29 01:31:49.011292 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.48s
2026-03-29 01:31:49.372990 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.44s
2026-03-29 01:31:49.373069 | orchestrator | osism.validations.tempest : Resolve flavor IDs -------------------------- 0.44s
2026-03-29 01:31:49.661305 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-03-29 01:31:49.666744 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-03-29 01:31:49.668775 | orchestrator |
2026-03-29 01:31:49.668860 | orchestrator | ## IDENTITY (API)
2026-03-29 01:31:49.668877 | orchestrator |
2026-03-29 01:31:49.668884 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-29 01:31:49.668891 | orchestrator | + echo
2026-03-29 01:31:49.668899 | orchestrator | + echo '## IDENTITY (API)'
2026-03-29 01:31:49.668903 | orchestrator | + echo
2026-03-29 01:31:49.668907 | orchestrator | + _tempest tempest.api.identity.v3
2026-03-29 01:31:49.668912 | orchestrator | + local regex=tempest.api.identity.v3
2026-03-29 01:31:49.669521 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16
2026-03-29 01:31:49.670137 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-29 01:31:49.674360 | orchestrator | + tee -a /opt/tempest/20260329-0131.log
2026-03-29 01:31:53.328451 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-29 01:31:53.328544 | orchestrator | Did you mean one of these?
2026-03-29 01:31:53.328581 | orchestrator | help
2026-03-29 01:31:53.328589 | orchestrator | init
2026-03-29 01:31:53.677486 | orchestrator |
2026-03-29 01:31:53.677539 | orchestrator | ## IMAGE (API)
2026-03-29 01:31:53.677546 | orchestrator |
2026-03-29 01:31:53.677550 | orchestrator | + echo
2026-03-29 01:31:53.677554 | orchestrator | + echo '## IMAGE (API)'
2026-03-29 01:31:53.677559 | orchestrator | + echo
2026-03-29 01:31:53.677563 | orchestrator | + _tempest tempest.api.image.v2
2026-03-29 01:31:53.677567 | orchestrator | + local regex=tempest.api.image.v2
2026-03-29 01:31:53.678292 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16
2026-03-29 01:31:53.679044 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-29 01:31:53.681215 | orchestrator | + tee -a /opt/tempest/20260329-0131.log
2026-03-29 01:31:57.376351 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-29 01:31:57.376478 | orchestrator | Did you mean one of these?
2026-03-29 01:31:57.376491 | orchestrator | help
2026-03-29 01:31:57.376498 | orchestrator | init
2026-03-29 01:31:57.711391 | orchestrator |
2026-03-29 01:31:57.711445 | orchestrator | ## NETWORK (API)
2026-03-29 01:31:57.711452 | orchestrator |
2026-03-29 01:31:57.711456 | orchestrator | + echo
2026-03-29 01:31:57.711460 | orchestrator | + echo '## NETWORK (API)'
2026-03-29 01:31:57.711473 | orchestrator | + echo
2026-03-29 01:31:57.711482 | orchestrator | + _tempest tempest.api.network
2026-03-29 01:31:57.711487 | orchestrator | + local regex=tempest.api.network
2026-03-29 01:31:57.712539 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16
2026-03-29 01:31:57.712733 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-29 01:31:57.714383 | orchestrator | + tee -a /opt/tempest/20260329-0131.log
2026-03-29 01:32:01.325970 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-29 01:32:01.326040 | orchestrator | Did you mean one of these?
2026-03-29 01:32:01.326047 | orchestrator | help
2026-03-29 01:32:01.326050 | orchestrator | init
2026-03-29 01:32:01.695713 | orchestrator |
2026-03-29 01:32:01.695776 | orchestrator | ## VOLUME (API)
2026-03-29 01:32:01.695786 | orchestrator |
2026-03-29 01:32:01.695793 | orchestrator | + echo
2026-03-29 01:32:01.695801 | orchestrator | + echo '## VOLUME (API)'
2026-03-29 01:32:01.695809 | orchestrator | + echo
2026-03-29 01:32:01.695815 | orchestrator | + _tempest tempest.api.volume
2026-03-29 01:32:01.695822 | orchestrator | + local regex=tempest.api.volume
2026-03-29 01:32:01.695970 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16
2026-03-29 01:32:01.696587 | orchestrator | ++ date +%Y%m%d-%H%M
2026-03-29 01:32:01.699073 | orchestrator | + tee -a /opt/tempest/20260329-0132.log
2026-03-29 01:32:05.371127 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'.
2026-03-29 01:32:05.371204 | orchestrator | Did you mean one of these?
2026-03-29 01:32:05.371211 | orchestrator | help 2026-03-29 01:32:05.371216 | orchestrator | init 2026-03-29 01:32:05.794917 | orchestrator | 2026-03-29 01:32:05.795002 | orchestrator | ## COMPUTE (API) 2026-03-29 01:32:05.795054 | orchestrator | 2026-03-29 01:32:05.795064 | orchestrator | + echo 2026-03-29 01:32:05.795071 | orchestrator | + echo '## COMPUTE (API)' 2026-03-29 01:32:05.795079 | orchestrator | + echo 2026-03-29 01:32:05.795086 | orchestrator | + _tempest tempest.api.compute 2026-03-29 01:32:05.795094 | orchestrator | + local regex=tempest.api.compute 2026-03-29 01:32:05.795179 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16 2026-03-29 01:32:05.796488 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-29 01:32:05.798781 | orchestrator | + tee -a /opt/tempest/20260329-0132.log 2026-03-29 01:32:09.517689 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-29 01:32:09.517774 | orchestrator | Did you mean one of these? 
2026-03-29 01:32:09.517786 | orchestrator | help 2026-03-29 01:32:09.517794 | orchestrator | init 2026-03-29 01:32:09.892020 | orchestrator | 2026-03-29 01:32:09.892099 | orchestrator | ## DNS (API) 2026-03-29 01:32:09.892106 | orchestrator | 2026-03-29 01:32:09.892110 | orchestrator | + echo 2026-03-29 01:32:09.892115 | orchestrator | + echo '## DNS (API)' 2026-03-29 01:32:09.892120 | orchestrator | + echo 2026-03-29 01:32:09.892124 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2 2026-03-29 01:32:09.892129 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2 2026-03-29 01:32:09.892575 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16 2026-03-29 01:32:09.893724 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-29 01:32:09.896693 | orchestrator | + tee -a /opt/tempest/20260329-0132.log 2026-03-29 01:32:13.477225 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-29 01:32:13.477314 | orchestrator | Did you mean one of these? 
2026-03-29 01:32:13.477372 | orchestrator | help 2026-03-29 01:32:13.477382 | orchestrator | init 2026-03-29 01:32:13.841677 | orchestrator | 2026-03-29 01:32:13.841724 | orchestrator | ## OBJECT-STORE (API) 2026-03-29 01:32:13.841730 | orchestrator | 2026-03-29 01:32:13.841734 | orchestrator | + echo 2026-03-29 01:32:13.841738 | orchestrator | + echo '## OBJECT-STORE (API)' 2026-03-29 01:32:13.841744 | orchestrator | + echo 2026-03-29 01:32:13.841751 | orchestrator | + _tempest tempest.api.object_storage 2026-03-29 01:32:13.841981 | orchestrator | + local regex=tempest.api.object_storage 2026-03-29 01:32:13.842576 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16 2026-03-29 01:32:13.844466 | orchestrator | ++ date +%Y%m%d-%H%M 2026-03-29 01:32:13.846560 | orchestrator | + tee -a /opt/tempest/20260329-0132.log 2026-03-29 01:32:17.530222 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-03-29 01:32:17.530367 | orchestrator | Did you mean one of these? 
2026-03-29 01:32:17.530382 | orchestrator | help 2026-03-29 01:32:17.530389 | orchestrator | init 2026-03-29 01:32:18.357790 | orchestrator | ok: Runtime: 0:01:58.254207 2026-03-29 01:32:18.378597 | 2026-03-29 01:32:18.378757 | TASK [Check prometheus alert status] 2026-03-29 01:32:18.917605 | orchestrator | skipping: Conditional result was False 2026-03-29 01:32:18.921298 | 2026-03-29 01:32:18.921474 | PLAY RECAP 2026-03-29 01:32:18.921698 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0 2026-03-29 01:32:18.921776 | 2026-03-29 01:32:19.144823 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-03-29 01:32:19.147186 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-03-29 01:32:19.918649 | 2026-03-29 01:32:19.918827 | PLAY [Post output play] 2026-03-29 01:32:19.936986 | 2026-03-29 01:32:19.937141 | LOOP [stage-output : Register sources] 2026-03-29 01:32:19.999900 | 2026-03-29 01:32:20.000138 | TASK [stage-output : Check sudo] 2026-03-29 01:32:20.838279 | orchestrator | sudo: a password is required 2026-03-29 01:32:21.037811 | orchestrator | ok: Runtime: 0:00:00.014912 2026-03-29 01:32:21.058705 | 2026-03-29 01:32:21.058955 | LOOP [stage-output : Set source and destination for files and folders] 2026-03-29 01:32:21.102884 | 2026-03-29 01:32:21.103195 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-03-29 01:32:21.173794 | orchestrator | ok 2026-03-29 01:32:21.184676 | 2026-03-29 01:32:21.184830 | LOOP [stage-output : Ensure target folders exist] 2026-03-29 01:32:21.628825 | orchestrator | ok: "docs" 2026-03-29 01:32:21.629350 | 2026-03-29 01:32:21.876581 | orchestrator | ok: "artifacts" 2026-03-29 01:32:22.173395 | orchestrator | ok: "logs" 2026-03-29 01:32:22.195822 | 2026-03-29 01:32:22.196010 | LOOP [stage-output : Copy files and folders to staging folder] 2026-03-29 01:32:22.234691 |
2026-03-29 01:32:22.235014 | TASK [stage-output : Make all log files readable] 2026-03-29 01:32:22.514103 | orchestrator | ok 2026-03-29 01:32:22.523374 | 2026-03-29 01:32:22.523578 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-03-29 01:32:22.568875 | orchestrator | skipping: Conditional result was False 2026-03-29 01:32:22.588683 | 2026-03-29 01:32:22.588876 | TASK [stage-output : Discover log files for compression] 2026-03-29 01:32:22.613979 | orchestrator | skipping: Conditional result was False 2026-03-29 01:32:22.629049 | 2026-03-29 01:32:22.629236 | LOOP [stage-output : Archive everything from logs] 2026-03-29 01:32:22.687948 | 2026-03-29 01:32:22.688154 | PLAY [Post cleanup play] 2026-03-29 01:32:22.697559 | 2026-03-29 01:32:22.697681 | TASK [Set cloud fact (Zuul deployment)] 2026-03-29 01:32:22.755924 | orchestrator | ok 2026-03-29 01:32:22.768870 | 2026-03-29 01:32:22.769022 | TASK [Set cloud fact (local deployment)] 2026-03-29 01:32:22.814165 | orchestrator | skipping: Conditional result was False 2026-03-29 01:32:22.834248 | 2026-03-29 01:32:22.834434 | TASK [Clean the cloud environment] 2026-03-29 01:32:24.088211 | orchestrator | 2026-03-29 01:32:24 - clean up servers 2026-03-29 01:32:24.919925 | orchestrator | 2026-03-29 01:32:24 - testbed-manager 2026-03-29 01:32:24.999995 | orchestrator | 2026-03-29 01:32:24 - testbed-node-5 2026-03-29 01:32:25.087160 | orchestrator | 2026-03-29 01:32:25 - testbed-node-2 2026-03-29 01:32:25.167979 | orchestrator | 2026-03-29 01:32:25 - testbed-node-0 2026-03-29 01:32:25.249123 | orchestrator | 2026-03-29 01:32:25 - testbed-node-3 2026-03-29 01:32:25.344937 | orchestrator | 2026-03-29 01:32:25 - testbed-node-1 2026-03-29 01:32:25.430686 | orchestrator | 2026-03-29 01:32:25 - testbed-node-4 2026-03-29 01:32:25.519285 | orchestrator | 2026-03-29 01:32:25 - clean up keypairs 2026-03-29 01:32:25.534874 | orchestrator | 2026-03-29 01:32:25 - testbed
2026-03-29 01:32:25.554910 | orchestrator | 2026-03-29 01:32:25 - wait for servers to be gone 2026-03-29 01:32:36.467221 | orchestrator | 2026-03-29 01:32:36 - clean up ports 2026-03-29 01:32:36.661030 | orchestrator | 2026-03-29 01:32:36 - 227abbe4-5e61-46b5-8ae8-cb075b264f0d 2026-03-29 01:32:36.896945 | orchestrator | 2026-03-29 01:32:36 - 30d89671-fdfb-4db4-95c4-dbdcf7a93901 2026-03-29 01:32:37.150428 | orchestrator | 2026-03-29 01:32:37 - 660948fb-ab79-4bef-a155-e6072c5eb8fe 2026-03-29 01:32:37.405404 | orchestrator | 2026-03-29 01:32:37 - 84cf7f1e-052b-4e6a-bd54-1b61b7b52d91 2026-03-29 01:32:37.661979 | orchestrator | 2026-03-29 01:32:37 - 9978e08b-77b6-4705-b5c8-af0241794e75 2026-03-29 01:32:37.928595 | orchestrator | 2026-03-29 01:32:37 - bdb6ba17-471c-44fb-a72a-1eb8573cffa1 2026-03-29 01:32:38.147898 | orchestrator | 2026-03-29 01:32:38 - e676f74a-63a3-423d-91dc-bd75f9e3b7f3 2026-03-29 01:32:38.569869 | orchestrator | 2026-03-29 01:32:38 - clean up volumes 2026-03-29 01:32:38.676139 | orchestrator | 2026-03-29 01:32:38 - testbed-volume-0-node-base 2026-03-29 01:32:38.716594 | orchestrator | 2026-03-29 01:32:38 - testbed-volume-1-node-base 2026-03-29 01:32:38.761102 | orchestrator | 2026-03-29 01:32:38 - testbed-volume-2-node-base 2026-03-29 01:32:38.800013 | orchestrator | 2026-03-29 01:32:38 - testbed-volume-4-node-base 2026-03-29 01:32:38.838089 | orchestrator | 2026-03-29 01:32:38 - testbed-volume-3-node-base 2026-03-29 01:32:38.877877 | orchestrator | 2026-03-29 01:32:38 - testbed-volume-5-node-base 2026-03-29 01:32:38.916359 | orchestrator | 2026-03-29 01:32:38 - testbed-volume-manager-base 2026-03-29 01:32:38.955119 | orchestrator | 2026-03-29 01:32:38 - testbed-volume-3-node-3 2026-03-29 01:32:38.993456 | orchestrator | 2026-03-29 01:32:38 - testbed-volume-8-node-5 2026-03-29 01:32:39.033936 | orchestrator | 2026-03-29 01:32:39 - testbed-volume-7-node-4 2026-03-29 01:32:39.073682 | orchestrator | 2026-03-29 01:32:39 - testbed-volume-0-node-3
2026-03-29 01:32:39.109596 | orchestrator | 2026-03-29 01:32:39 - testbed-volume-1-node-4 2026-03-29 01:32:39.147882 | orchestrator | 2026-03-29 01:32:39 - testbed-volume-4-node-4 2026-03-29 01:32:39.185485 | orchestrator | 2026-03-29 01:32:39 - testbed-volume-2-node-5 2026-03-29 01:32:39.223769 | orchestrator | 2026-03-29 01:32:39 - testbed-volume-6-node-3 2026-03-29 01:32:39.261489 | orchestrator | 2026-03-29 01:32:39 - testbed-volume-5-node-5 2026-03-29 01:32:39.299102 | orchestrator | 2026-03-29 01:32:39 - disconnect routers 2026-03-29 01:32:39.417651 | orchestrator | 2026-03-29 01:32:39 - testbed 2026-03-29 01:32:40.417305 | orchestrator | 2026-03-29 01:32:40 - clean up subnets 2026-03-29 01:32:40.467873 | orchestrator | 2026-03-29 01:32:40 - subnet-testbed-management 2026-03-29 01:32:40.636865 | orchestrator | 2026-03-29 01:32:40 - clean up networks 2026-03-29 01:32:41.327757 | orchestrator | 2026-03-29 01:32:41 - net-testbed-management 2026-03-29 01:32:41.632620 | orchestrator | 2026-03-29 01:32:41 - clean up security groups 2026-03-29 01:32:41.680545 | orchestrator | 2026-03-29 01:32:41 - testbed-management 2026-03-29 01:32:41.842849 | orchestrator | 2026-03-29 01:32:41 - testbed-node 2026-03-29 01:32:41.969671 | orchestrator | 2026-03-29 01:32:41 - clean up floating ips 2026-03-29 01:32:42.002222 | orchestrator | 2026-03-29 01:32:42 - 81.163.192.231 2026-03-29 01:32:42.364255 | orchestrator | 2026-03-29 01:32:42 - clean up routers 2026-03-29 01:32:42.536219 | orchestrator | 2026-03-29 01:32:42 - testbed 2026-03-29 01:32:43.894459 | orchestrator | ok: Runtime: 0:00:20.284979 2026-03-29 01:32:43.898277 | 2026-03-29 01:32:43.898437 | PLAY RECAP 2026-03-29 01:32:43.898597 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-03-29 01:32:43.898660 | 2026-03-29 01:32:44.039025 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-03-29 01:32:44.040082 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-29 01:32:44.810754 | 2026-03-29 01:32:44.810952 | PLAY [Cleanup play] 2026-03-29 01:32:44.828108 | 2026-03-29 01:32:44.828259 | TASK [Set cloud fact (Zuul deployment)] 2026-03-29 01:32:44.884875 | orchestrator | ok 2026-03-29 01:32:44.893638 | 2026-03-29 01:32:44.893897 | TASK [Set cloud fact (local deployment)] 2026-03-29 01:32:44.918643 | orchestrator | skipping: Conditional result was False 2026-03-29 01:32:44.929628 | 2026-03-29 01:32:44.929763 | TASK [Clean the cloud environment] 2026-03-29 01:32:46.279284 | orchestrator | 2026-03-29 01:32:46 - clean up servers 2026-03-29 01:32:46.785277 | orchestrator | 2026-03-29 01:32:46 - clean up keypairs 2026-03-29 01:32:46.799642 | orchestrator | 2026-03-29 01:32:46 - wait for servers to be gone 2026-03-29 01:32:46.839763 | orchestrator | 2026-03-29 01:32:46 - clean up ports 2026-03-29 01:32:46.942112 | orchestrator | 2026-03-29 01:32:46 - clean up volumes 2026-03-29 01:32:47.026967 | orchestrator | 2026-03-29 01:32:47 - disconnect routers 2026-03-29 01:32:47.061061 | orchestrator | 2026-03-29 01:32:47 - clean up subnets 2026-03-29 01:32:47.087447 | orchestrator | 2026-03-29 01:32:47 - clean up networks 2026-03-29 01:32:47.217980 | orchestrator | 2026-03-29 01:32:47 - clean up security groups 2026-03-29 01:32:47.253262 | orchestrator | 2026-03-29 01:32:47 - clean up floating ips 2026-03-29 01:32:47.274713 | orchestrator | 2026-03-29 01:32:47 - clean up routers 2026-03-29 01:32:47.477115 | orchestrator | ok: Runtime: 0:00:01.615991 2026-03-29 01:32:47.479606 | 2026-03-29 01:32:47.479711 | PLAY RECAP 2026-03-29 01:32:47.479776 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-03-29 01:32:47.479808 | 2026-03-29 01:32:47.613826 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-29 01:32:47.616419 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-29 01:32:48.362322 | 
2026-03-29 01:32:48.362489 | PLAY [Base post-fetch] 2026-03-29 01:32:48.378153 | 2026-03-29 01:32:48.378290 | TASK [fetch-output : Set log path for multiple nodes] 2026-03-29 01:32:48.433482 | orchestrator | skipping: Conditional result was False 2026-03-29 01:32:48.446462 | 2026-03-29 01:32:48.446706 | TASK [fetch-output : Set log path for single node] 2026-03-29 01:32:48.496975 | orchestrator | ok 2026-03-29 01:32:48.505956 | 2026-03-29 01:32:48.506091 | LOOP [fetch-output : Ensure local output dirs] 2026-03-29 01:32:48.994603 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/b6f2ba222d6b4c61a0a2e9d3c483dd72/work/logs" 2026-03-29 01:32:49.289117 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b6f2ba222d6b4c61a0a2e9d3c483dd72/work/artifacts" 2026-03-29 01:32:49.578669 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b6f2ba222d6b4c61a0a2e9d3c483dd72/work/docs" 2026-03-29 01:32:49.604453 | 2026-03-29 01:32:49.604655 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-29 01:32:50.567785 | orchestrator | changed: .d..t...... ./ 2026-03-29 01:32:50.568182 | orchestrator | changed: All items complete 2026-03-29 01:32:50.568247 | 2026-03-29 01:32:51.301172 | orchestrator | changed: .d..t...... ./
2026-03-29 01:32:52.032965 | orchestrator | changed: .d..t...... ./ 2026-03-29 01:32:52.061947 | 2026-03-29 01:32:52.062097 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-29 01:32:52.099677 | orchestrator | skipping: Conditional result was False 2026-03-29 01:32:52.104030 | orchestrator | skipping: Conditional result was False 2026-03-29 01:32:52.120001 | 2026-03-29 01:32:52.120156 | PLAY RECAP 2026-03-29 01:32:52.120235 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-29 01:32:52.120272 | 2026-03-29 01:32:52.247057 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-29 01:32:52.251133 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-29 01:32:53.010332 | 2026-03-29 01:32:53.010508 | PLAY [Base post] 2026-03-29 01:32:53.025754 | 2026-03-29 01:32:53.025909 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-29 01:32:54.465726 | orchestrator | changed 2026-03-29 01:32:54.473331 | 2026-03-29 01:32:54.473451 | PLAY RECAP 2026-03-29 01:32:54.473516 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-29 01:32:54.473597 | 2026-03-29 01:32:54.593344 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-29 01:32:54.594378 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-29 01:32:55.368822 | 2026-03-29 01:32:55.368997 | PLAY [Base post-logs] 2026-03-29 01:32:55.379758 | 2026-03-29 01:32:55.379905 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-29 01:32:55.876619 | localhost | changed 2026-03-29 01:32:55.887620 | 2026-03-29 01:32:55.887771 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-29 01:32:55.923320 | localhost | ok 2026-03-29 01:32:55.926913 | 2026-03-29 01:32:55.927023 | TASK [Set zuul-log-path fact]
2026-03-29 01:32:55.942175 | localhost | ok 2026-03-29 01:32:55.951048 | 2026-03-29 01:32:55.951157 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-29 01:32:55.976704 | localhost | ok 2026-03-29 01:32:55.980473 | 2026-03-29 01:32:55.980617 | TASK [upload-logs : Create log directories] 2026-03-29 01:32:56.504045 | localhost | changed 2026-03-29 01:32:56.509645 | 2026-03-29 01:32:56.509822 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-29 01:32:57.025806 | localhost -> localhost | ok: Runtime: 0:00:00.006505 2026-03-29 01:32:57.030504 | 2026-03-29 01:32:57.030662 | TASK [upload-logs : Upload logs to log server] 2026-03-29 01:32:57.609214 | localhost | Output suppressed because no_log was given 2026-03-29 01:32:57.612925 | 2026-03-29 01:32:57.613098 | LOOP [upload-logs : Compress console log and json output] 2026-03-29 01:32:57.675168 | localhost | skipping: Conditional result was False 2026-03-29 01:32:57.679768 | localhost | skipping: Conditional result was False 2026-03-29 01:32:57.695494 | 2026-03-29 01:32:57.695746 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-29 01:32:57.742421 | localhost | skipping: Conditional result was False 2026-03-29 01:32:57.743070 | 2026-03-29 01:32:57.746563 | localhost | skipping: Conditional result was False 2026-03-29 01:32:57.760485 | 2026-03-29 01:32:57.760804 | LOOP [upload-logs : Upload console log and json output]